Nov 26 17:15:19 np0005537197 kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 26 17:15:19 np0005537197 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 26 17:15:19 np0005537197 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 26 17:15:19 np0005537197 kernel: BIOS-provided physical RAM map:
Nov 26 17:15:19 np0005537197 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 26 17:15:19 np0005537197 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 26 17:15:19 np0005537197 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 26 17:15:19 np0005537197 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 26 17:15:19 np0005537197 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 26 17:15:19 np0005537197 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 26 17:15:19 np0005537197 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 26 17:15:19 np0005537197 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Nov 26 17:15:19 np0005537197 kernel: NX (Execute Disable) protection: active
Nov 26 17:15:19 np0005537197 kernel: APIC: Static calls initialized
Nov 26 17:15:19 np0005537197 kernel: SMBIOS 2.8 present.
Nov 26 17:15:19 np0005537197 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 26 17:15:19 np0005537197 kernel: Hypervisor detected: KVM
Nov 26 17:15:19 np0005537197 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 26 17:15:19 np0005537197 kernel: kvm-clock: using sched offset of 9101365839 cycles
Nov 26 17:15:19 np0005537197 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 26 17:15:19 np0005537197 kernel: tsc: Detected 2799.998 MHz processor
Nov 26 17:15:19 np0005537197 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 26 17:15:19 np0005537197 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 26 17:15:19 np0005537197 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 26 17:15:19 np0005537197 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 26 17:15:19 np0005537197 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 26 17:15:19 np0005537197 kernel: Using GB pages for direct mapping
Nov 26 17:15:19 np0005537197 kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 26 17:15:19 np0005537197 kernel: ACPI: Early table checksum verification disabled
Nov 26 17:15:19 np0005537197 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 26 17:15:19 np0005537197 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 26 17:15:19 np0005537197 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 26 17:15:19 np0005537197 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 26 17:15:19 np0005537197 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 26 17:15:19 np0005537197 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 26 17:15:19 np0005537197 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 26 17:15:19 np0005537197 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 26 17:15:19 np0005537197 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 26 17:15:19 np0005537197 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 26 17:15:19 np0005537197 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 26 17:15:19 np0005537197 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 26 17:15:19 np0005537197 kernel: No NUMA configuration found
Nov 26 17:15:19 np0005537197 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 26 17:15:19 np0005537197 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Nov 26 17:15:19 np0005537197 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Nov 26 17:15:19 np0005537197 kernel: Zone ranges:
Nov 26 17:15:19 np0005537197 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 26 17:15:19 np0005537197 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 26 17:15:19 np0005537197 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 26 17:15:19 np0005537197 kernel:  Device   empty
Nov 26 17:15:19 np0005537197 kernel: Movable zone start for each node
Nov 26 17:15:19 np0005537197 kernel: Early memory node ranges
Nov 26 17:15:19 np0005537197 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 26 17:15:19 np0005537197 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 26 17:15:19 np0005537197 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 26 17:15:19 np0005537197 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 26 17:15:19 np0005537197 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 26 17:15:19 np0005537197 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 26 17:15:19 np0005537197 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 26 17:15:19 np0005537197 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 26 17:15:19 np0005537197 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 26 17:15:19 np0005537197 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 26 17:15:19 np0005537197 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 26 17:15:19 np0005537197 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 26 17:15:19 np0005537197 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 26 17:15:19 np0005537197 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 26 17:15:19 np0005537197 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 26 17:15:19 np0005537197 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 26 17:15:19 np0005537197 kernel: TSC deadline timer available
Nov 26 17:15:19 np0005537197 kernel: CPU topo: Max. logical packages:   8
Nov 26 17:15:19 np0005537197 kernel: CPU topo: Max. logical dies:       8
Nov 26 17:15:19 np0005537197 kernel: CPU topo: Max. dies per package:   1
Nov 26 17:15:19 np0005537197 kernel: CPU topo: Max. threads per core:   1
Nov 26 17:15:19 np0005537197 kernel: CPU topo: Num. cores per package:     1
Nov 26 17:15:19 np0005537197 kernel: CPU topo: Num. threads per package:   1
Nov 26 17:15:19 np0005537197 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 26 17:15:19 np0005537197 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 26 17:15:19 np0005537197 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 26 17:15:19 np0005537197 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 26 17:15:19 np0005537197 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 26 17:15:19 np0005537197 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 26 17:15:19 np0005537197 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 26 17:15:19 np0005537197 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 26 17:15:19 np0005537197 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 26 17:15:19 np0005537197 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 26 17:15:19 np0005537197 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 26 17:15:19 np0005537197 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 26 17:15:19 np0005537197 kernel: Booting paravirtualized kernel on KVM
Nov 26 17:15:19 np0005537197 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 26 17:15:19 np0005537197 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 26 17:15:19 np0005537197 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 26 17:15:19 np0005537197 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 26 17:15:19 np0005537197 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 26 17:15:19 np0005537197 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 26 17:15:19 np0005537197 kernel: random: crng init done
Nov 26 17:15:19 np0005537197 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 26 17:15:19 np0005537197 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 26 17:15:19 np0005537197 kernel: Fallback order for Node 0: 0 
Nov 26 17:15:19 np0005537197 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 26 17:15:19 np0005537197 kernel: Policy zone: Normal
Nov 26 17:15:19 np0005537197 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 26 17:15:19 np0005537197 kernel: software IO TLB: area num 8.
Nov 26 17:15:19 np0005537197 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 26 17:15:19 np0005537197 kernel: ftrace: allocating 49313 entries in 193 pages
Nov 26 17:15:19 np0005537197 kernel: ftrace: allocated 193 pages with 3 groups
Nov 26 17:15:19 np0005537197 kernel: Dynamic Preempt: voluntary
Nov 26 17:15:19 np0005537197 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 26 17:15:19 np0005537197 kernel: rcu: 	RCU event tracing is enabled.
Nov 26 17:15:19 np0005537197 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 26 17:15:19 np0005537197 kernel: 	Trampoline variant of Tasks RCU enabled.
Nov 26 17:15:19 np0005537197 kernel: 	Rude variant of Tasks RCU enabled.
Nov 26 17:15:19 np0005537197 kernel: 	Tracing variant of Tasks RCU enabled.
Nov 26 17:15:19 np0005537197 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 26 17:15:19 np0005537197 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 26 17:15:19 np0005537197 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 26 17:15:19 np0005537197 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 26 17:15:19 np0005537197 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 26 17:15:19 np0005537197 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 26 17:15:19 np0005537197 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 26 17:15:19 np0005537197 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 26 17:15:19 np0005537197 kernel: Console: colour VGA+ 80x25
Nov 26 17:15:19 np0005537197 kernel: printk: console [ttyS0] enabled
Nov 26 17:15:19 np0005537197 kernel: ACPI: Core revision 20230331
Nov 26 17:15:19 np0005537197 kernel: APIC: Switch to symmetric I/O mode setup
Nov 26 17:15:19 np0005537197 kernel: x2apic enabled
Nov 26 17:15:19 np0005537197 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 26 17:15:19 np0005537197 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 26 17:15:19 np0005537197 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Nov 26 17:15:19 np0005537197 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 26 17:15:19 np0005537197 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 26 17:15:19 np0005537197 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 26 17:15:19 np0005537197 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 26 17:15:19 np0005537197 kernel: Spectre V2 : Mitigation: Retpolines
Nov 26 17:15:19 np0005537197 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 26 17:15:19 np0005537197 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 26 17:15:19 np0005537197 kernel: RETBleed: Mitigation: untrained return thunk
Nov 26 17:15:19 np0005537197 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 26 17:15:19 np0005537197 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 26 17:15:19 np0005537197 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 26 17:15:19 np0005537197 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 26 17:15:19 np0005537197 kernel: x86/bugs: return thunk changed
Nov 26 17:15:19 np0005537197 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 26 17:15:19 np0005537197 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 26 17:15:19 np0005537197 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 26 17:15:19 np0005537197 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 26 17:15:19 np0005537197 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 26 17:15:19 np0005537197 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 26 17:15:19 np0005537197 kernel: Freeing SMP alternatives memory: 40K
Nov 26 17:15:19 np0005537197 kernel: pid_max: default: 32768 minimum: 301
Nov 26 17:15:19 np0005537197 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 26 17:15:19 np0005537197 kernel: landlock: Up and running.
Nov 26 17:15:19 np0005537197 kernel: Yama: becoming mindful.
Nov 26 17:15:19 np0005537197 kernel: SELinux:  Initializing.
Nov 26 17:15:19 np0005537197 kernel: LSM support for eBPF active
Nov 26 17:15:19 np0005537197 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 26 17:15:19 np0005537197 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 26 17:15:19 np0005537197 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 26 17:15:19 np0005537197 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 26 17:15:19 np0005537197 kernel: ... version:                0
Nov 26 17:15:19 np0005537197 kernel: ... bit width:              48
Nov 26 17:15:19 np0005537197 kernel: ... generic registers:      6
Nov 26 17:15:19 np0005537197 kernel: ... value mask:             0000ffffffffffff
Nov 26 17:15:19 np0005537197 kernel: ... max period:             00007fffffffffff
Nov 26 17:15:19 np0005537197 kernel: ... fixed-purpose events:   0
Nov 26 17:15:19 np0005537197 kernel: ... event mask:             000000000000003f
Nov 26 17:15:19 np0005537197 kernel: signal: max sigframe size: 1776
Nov 26 17:15:19 np0005537197 kernel: rcu: Hierarchical SRCU implementation.
Nov 26 17:15:19 np0005537197 kernel: rcu: 	Max phase no-delay instances is 400.
Nov 26 17:15:19 np0005537197 kernel: smp: Bringing up secondary CPUs ...
Nov 26 17:15:19 np0005537197 kernel: smpboot: x86: Booting SMP configuration:
Nov 26 17:15:19 np0005537197 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 26 17:15:19 np0005537197 kernel: smp: Brought up 1 node, 8 CPUs
Nov 26 17:15:19 np0005537197 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 26 17:15:19 np0005537197 kernel: node 0 deferred pages initialised in 13ms
Nov 26 17:15:19 np0005537197 kernel: Memory: 7765988K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 616268K reserved, 0K cma-reserved)
Nov 26 17:15:19 np0005537197 kernel: devtmpfs: initialized
Nov 26 17:15:19 np0005537197 kernel: x86/mm: Memory block size: 128MB
Nov 26 17:15:19 np0005537197 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 26 17:15:19 np0005537197 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 26 17:15:19 np0005537197 kernel: pinctrl core: initialized pinctrl subsystem
Nov 26 17:15:19 np0005537197 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 26 17:15:19 np0005537197 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 26 17:15:19 np0005537197 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 26 17:15:19 np0005537197 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 26 17:15:19 np0005537197 kernel: audit: initializing netlink subsys (disabled)
Nov 26 17:15:19 np0005537197 kernel: audit: type=2000 audit(1764195317.941:1): state=initialized audit_enabled=0 res=1
Nov 26 17:15:19 np0005537197 kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 26 17:15:19 np0005537197 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 26 17:15:19 np0005537197 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 26 17:15:19 np0005537197 kernel: cpuidle: using governor menu
Nov 26 17:15:19 np0005537197 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 26 17:15:19 np0005537197 kernel: PCI: Using configuration type 1 for base access
Nov 26 17:15:19 np0005537197 kernel: PCI: Using configuration type 1 for extended access
Nov 26 17:15:19 np0005537197 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 26 17:15:19 np0005537197 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 26 17:15:19 np0005537197 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 26 17:15:19 np0005537197 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 26 17:15:19 np0005537197 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 26 17:15:19 np0005537197 kernel: Demotion targets for Node 0: null
Nov 26 17:15:19 np0005537197 kernel: cryptd: max_cpu_qlen set to 1000
Nov 26 17:15:19 np0005537197 kernel: ACPI: Added _OSI(Module Device)
Nov 26 17:15:19 np0005537197 kernel: ACPI: Added _OSI(Processor Device)
Nov 26 17:15:19 np0005537197 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 26 17:15:19 np0005537197 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 26 17:15:19 np0005537197 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 26 17:15:19 np0005537197 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 26 17:15:19 np0005537197 kernel: ACPI: Interpreter enabled
Nov 26 17:15:19 np0005537197 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 26 17:15:19 np0005537197 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 26 17:15:19 np0005537197 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 26 17:15:19 np0005537197 kernel: PCI: Using E820 reservations for host bridge windows
Nov 26 17:15:19 np0005537197 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 26 17:15:19 np0005537197 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 26 17:15:19 np0005537197 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 26 17:15:19 np0005537197 kernel: acpiphp: Slot [3] registered
Nov 26 17:15:19 np0005537197 kernel: acpiphp: Slot [4] registered
Nov 26 17:15:19 np0005537197 kernel: acpiphp: Slot [5] registered
Nov 26 17:15:19 np0005537197 kernel: acpiphp: Slot [6] registered
Nov 26 17:15:19 np0005537197 kernel: acpiphp: Slot [7] registered
Nov 26 17:15:19 np0005537197 kernel: acpiphp: Slot [8] registered
Nov 26 17:15:19 np0005537197 kernel: acpiphp: Slot [9] registered
Nov 26 17:15:19 np0005537197 kernel: acpiphp: Slot [10] registered
Nov 26 17:15:19 np0005537197 kernel: acpiphp: Slot [11] registered
Nov 26 17:15:19 np0005537197 kernel: acpiphp: Slot [12] registered
Nov 26 17:15:19 np0005537197 kernel: acpiphp: Slot [13] registered
Nov 26 17:15:19 np0005537197 kernel: acpiphp: Slot [14] registered
Nov 26 17:15:19 np0005537197 kernel: acpiphp: Slot [15] registered
Nov 26 17:15:19 np0005537197 kernel: acpiphp: Slot [16] registered
Nov 26 17:15:19 np0005537197 kernel: acpiphp: Slot [17] registered
Nov 26 17:15:19 np0005537197 kernel: acpiphp: Slot [18] registered
Nov 26 17:15:19 np0005537197 kernel: acpiphp: Slot [19] registered
Nov 26 17:15:19 np0005537197 kernel: acpiphp: Slot [20] registered
Nov 26 17:15:19 np0005537197 kernel: acpiphp: Slot [21] registered
Nov 26 17:15:19 np0005537197 kernel: acpiphp: Slot [22] registered
Nov 26 17:15:19 np0005537197 kernel: acpiphp: Slot [23] registered
Nov 26 17:15:19 np0005537197 kernel: acpiphp: Slot [24] registered
Nov 26 17:15:19 np0005537197 kernel: acpiphp: Slot [25] registered
Nov 26 17:15:19 np0005537197 kernel: acpiphp: Slot [26] registered
Nov 26 17:15:19 np0005537197 kernel: acpiphp: Slot [27] registered
Nov 26 17:15:19 np0005537197 kernel: acpiphp: Slot [28] registered
Nov 26 17:15:19 np0005537197 kernel: acpiphp: Slot [29] registered
Nov 26 17:15:19 np0005537197 kernel: acpiphp: Slot [30] registered
Nov 26 17:15:19 np0005537197 kernel: acpiphp: Slot [31] registered
Nov 26 17:15:19 np0005537197 kernel: PCI host bridge to bus 0000:00
Nov 26 17:15:19 np0005537197 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 26 17:15:19 np0005537197 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 26 17:15:19 np0005537197 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 26 17:15:19 np0005537197 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 26 17:15:19 np0005537197 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 26 17:15:19 np0005537197 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 26 17:15:19 np0005537197 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 26 17:15:19 np0005537197 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 26 17:15:19 np0005537197 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 26 17:15:19 np0005537197 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 26 17:15:19 np0005537197 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 26 17:15:19 np0005537197 kernel: iommu: Default domain type: Translated
Nov 26 17:15:19 np0005537197 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 26 17:15:19 np0005537197 kernel: SCSI subsystem initialized
Nov 26 17:15:19 np0005537197 kernel: ACPI: bus type USB registered
Nov 26 17:15:19 np0005537197 kernel: usbcore: registered new interface driver usbfs
Nov 26 17:15:19 np0005537197 kernel: usbcore: registered new interface driver hub
Nov 26 17:15:19 np0005537197 kernel: usbcore: registered new device driver usb
Nov 26 17:15:19 np0005537197 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 26 17:15:19 np0005537197 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 26 17:15:19 np0005537197 kernel: PTP clock support registered
Nov 26 17:15:19 np0005537197 kernel: EDAC MC: Ver: 3.0.0
Nov 26 17:15:19 np0005537197 kernel: NetLabel: Initializing
Nov 26 17:15:19 np0005537197 kernel: NetLabel:  domain hash size = 128
Nov 26 17:15:19 np0005537197 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 26 17:15:19 np0005537197 kernel: NetLabel:  unlabeled traffic allowed by default
Nov 26 17:15:19 np0005537197 kernel: PCI: Using ACPI for IRQ routing
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 26 17:15:19 np0005537197 kernel: vgaarb: loaded
Nov 26 17:15:19 np0005537197 kernel: clocksource: Switched to clocksource kvm-clock
Nov 26 17:15:19 np0005537197 kernel: VFS: Disk quotas dquot_6.6.0
Nov 26 17:15:19 np0005537197 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 26 17:15:19 np0005537197 kernel: pnp: PnP ACPI init
Nov 26 17:15:19 np0005537197 kernel: pnp: PnP ACPI: found 5 devices
Nov 26 17:15:19 np0005537197 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 26 17:15:19 np0005537197 kernel: NET: Registered PF_INET protocol family
Nov 26 17:15:19 np0005537197 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 26 17:15:19 np0005537197 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 26 17:15:19 np0005537197 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 26 17:15:19 np0005537197 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 26 17:15:19 np0005537197 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 26 17:15:19 np0005537197 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 26 17:15:19 np0005537197 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 26 17:15:19 np0005537197 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 26 17:15:19 np0005537197 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 26 17:15:19 np0005537197 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 26 17:15:19 np0005537197 kernel: NET: Registered PF_XDP protocol family
Nov 26 17:15:19 np0005537197 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 26 17:15:19 np0005537197 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 26 17:15:19 np0005537197 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 26 17:15:19 np0005537197 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 26 17:15:19 np0005537197 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 26 17:15:19 np0005537197 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 26 17:15:19 np0005537197 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 78566 usecs
Nov 26 17:15:19 np0005537197 kernel: PCI: CLS 0 bytes, default 64
Nov 26 17:15:19 np0005537197 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 26 17:15:19 np0005537197 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 26 17:15:19 np0005537197 kernel: ACPI: bus type thunderbolt registered
Nov 26 17:15:19 np0005537197 kernel: Trying to unpack rootfs image as initramfs...
Nov 26 17:15:19 np0005537197 kernel: Initialise system trusted keyrings
Nov 26 17:15:19 np0005537197 kernel: Key type blacklist registered
Nov 26 17:15:19 np0005537197 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 26 17:15:19 np0005537197 kernel: zbud: loaded
Nov 26 17:15:19 np0005537197 kernel: integrity: Platform Keyring initialized
Nov 26 17:15:19 np0005537197 kernel: integrity: Machine keyring initialized
Nov 26 17:15:19 np0005537197 kernel: Freeing initrd memory: 85868K
Nov 26 17:15:19 np0005537197 kernel: NET: Registered PF_ALG protocol family
Nov 26 17:15:19 np0005537197 kernel: xor: automatically using best checksumming function   avx       
Nov 26 17:15:19 np0005537197 kernel: Key type asymmetric registered
Nov 26 17:15:19 np0005537197 kernel: Asymmetric key parser 'x509' registered
Nov 26 17:15:19 np0005537197 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 26 17:15:19 np0005537197 kernel: io scheduler mq-deadline registered
Nov 26 17:15:19 np0005537197 kernel: io scheduler kyber registered
Nov 26 17:15:19 np0005537197 kernel: io scheduler bfq registered
Nov 26 17:15:19 np0005537197 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 26 17:15:19 np0005537197 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 26 17:15:19 np0005537197 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 26 17:15:19 np0005537197 kernel: ACPI: button: Power Button [PWRF]
Nov 26 17:15:19 np0005537197 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 26 17:15:19 np0005537197 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 26 17:15:19 np0005537197 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 26 17:15:19 np0005537197 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 26 17:15:19 np0005537197 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 26 17:15:19 np0005537197 kernel: Non-volatile memory driver v1.3
Nov 26 17:15:19 np0005537197 kernel: rdac: device handler registered
Nov 26 17:15:19 np0005537197 kernel: hp_sw: device handler registered
Nov 26 17:15:19 np0005537197 kernel: emc: device handler registered
Nov 26 17:15:19 np0005537197 kernel: alua: device handler registered
Nov 26 17:15:19 np0005537197 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 26 17:15:19 np0005537197 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 26 17:15:19 np0005537197 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 26 17:15:19 np0005537197 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 26 17:15:19 np0005537197 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 26 17:15:19 np0005537197 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 26 17:15:19 np0005537197 kernel: usb usb1: Product: UHCI Host Controller
Nov 26 17:15:19 np0005537197 kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 26 17:15:19 np0005537197 kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 26 17:15:19 np0005537197 kernel: hub 1-0:1.0: USB hub found
Nov 26 17:15:19 np0005537197 kernel: hub 1-0:1.0: 2 ports detected
Nov 26 17:15:19 np0005537197 kernel: usbcore: registered new interface driver usbserial_generic
Nov 26 17:15:19 np0005537197 kernel: usbserial: USB Serial support registered for generic
Nov 26 17:15:19 np0005537197 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 26 17:15:19 np0005537197 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 26 17:15:19 np0005537197 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 26 17:15:19 np0005537197 kernel: mousedev: PS/2 mouse device common for all mice
Nov 26 17:15:19 np0005537197 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 26 17:15:19 np0005537197 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 26 17:15:19 np0005537197 kernel: rtc_cmos 00:04: registered as rtc0
Nov 26 17:15:19 np0005537197 kernel: rtc_cmos 00:04: setting system clock to 2025-11-26T22:15:18 UTC (1764195318)
Nov 26 17:15:19 np0005537197 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 26 17:15:19 np0005537197 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 26 17:15:19 np0005537197 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 26 17:15:19 np0005537197 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 26 17:15:19 np0005537197 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 26 17:15:19 np0005537197 kernel: usbcore: registered new interface driver usbhid
Nov 26 17:15:19 np0005537197 kernel: usbhid: USB HID core driver
Nov 26 17:15:19 np0005537197 kernel: drop_monitor: Initializing network drop monitor service
Nov 26 17:15:19 np0005537197 kernel: Initializing XFRM netlink socket
Nov 26 17:15:19 np0005537197 kernel: NET: Registered PF_INET6 protocol family
Nov 26 17:15:19 np0005537197 kernel: Segment Routing with IPv6
Nov 26 17:15:19 np0005537197 kernel: NET: Registered PF_PACKET protocol family
Nov 26 17:15:19 np0005537197 kernel: mpls_gso: MPLS GSO support
Nov 26 17:15:19 np0005537197 kernel: IPI shorthand broadcast: enabled
Nov 26 17:15:19 np0005537197 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 26 17:15:19 np0005537197 kernel: AES CTR mode by8 optimization enabled
Nov 26 17:15:19 np0005537197 kernel: sched_clock: Marking stable (1290010301, 162529086)->(1570322511, -117783124)
Nov 26 17:15:19 np0005537197 kernel: registered taskstats version 1
Nov 26 17:15:19 np0005537197 kernel: Loading compiled-in X.509 certificates
Nov 26 17:15:19 np0005537197 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 26 17:15:19 np0005537197 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 26 17:15:19 np0005537197 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 26 17:15:19 np0005537197 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 26 17:15:19 np0005537197 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 26 17:15:19 np0005537197 kernel: Demotion targets for Node 0: null
Nov 26 17:15:19 np0005537197 kernel: page_owner is disabled
Nov 26 17:15:19 np0005537197 kernel: Key type .fscrypt registered
Nov 26 17:15:19 np0005537197 kernel: Key type fscrypt-provisioning registered
Nov 26 17:15:19 np0005537197 kernel: Key type big_key registered
Nov 26 17:15:19 np0005537197 kernel: Key type encrypted registered
Nov 26 17:15:19 np0005537197 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 26 17:15:19 np0005537197 kernel: Loading compiled-in module X.509 certificates
Nov 26 17:15:19 np0005537197 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 26 17:15:19 np0005537197 kernel: ima: Allocated hash algorithm: sha256
Nov 26 17:15:19 np0005537197 kernel: ima: No architecture policies found
Nov 26 17:15:19 np0005537197 kernel: evm: Initialising EVM extended attributes:
Nov 26 17:15:19 np0005537197 kernel: evm: security.selinux
Nov 26 17:15:19 np0005537197 kernel: evm: security.SMACK64 (disabled)
Nov 26 17:15:19 np0005537197 kernel: evm: security.SMACK64EXEC (disabled)
Nov 26 17:15:19 np0005537197 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 26 17:15:19 np0005537197 kernel: evm: security.SMACK64MMAP (disabled)
Nov 26 17:15:19 np0005537197 kernel: evm: security.apparmor (disabled)
Nov 26 17:15:19 np0005537197 kernel: evm: security.ima
Nov 26 17:15:19 np0005537197 kernel: evm: security.capability
Nov 26 17:15:19 np0005537197 kernel: evm: HMAC attrs: 0x1
Nov 26 17:15:19 np0005537197 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 26 17:15:19 np0005537197 kernel: Running certificate verification RSA selftest
Nov 26 17:15:19 np0005537197 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 26 17:15:19 np0005537197 kernel: Running certificate verification ECDSA selftest
Nov 26 17:15:19 np0005537197 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 26 17:15:19 np0005537197 kernel: clk: Disabling unused clocks
Nov 26 17:15:19 np0005537197 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 26 17:15:19 np0005537197 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 26 17:15:19 np0005537197 kernel: usb 1-1: Product: QEMU USB Tablet
Nov 26 17:15:19 np0005537197 kernel: usb 1-1: Manufacturer: QEMU
Nov 26 17:15:19 np0005537197 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 26 17:15:19 np0005537197 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 26 17:15:19 np0005537197 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 26 17:15:19 np0005537197 kernel: Freeing unused decrypted memory: 2028K
Nov 26 17:15:19 np0005537197 kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 26 17:15:19 np0005537197 kernel: Write protecting the kernel read-only data: 30720k
Nov 26 17:15:19 np0005537197 kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 26 17:15:19 np0005537197 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 26 17:15:19 np0005537197 kernel: Run /init as init process
Nov 26 17:15:19 np0005537197 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 26 17:15:19 np0005537197 systemd: Detected virtualization kvm.
Nov 26 17:15:19 np0005537197 systemd: Detected architecture x86-64.
Nov 26 17:15:19 np0005537197 systemd: Running in initrd.
Nov 26 17:15:19 np0005537197 systemd: No hostname configured, using default hostname.
Nov 26 17:15:19 np0005537197 systemd: Hostname set to <localhost>.
Nov 26 17:15:19 np0005537197 systemd: Initializing machine ID from VM UUID.
Nov 26 17:15:19 np0005537197 systemd: Queued start job for default target Initrd Default Target.
Nov 26 17:15:19 np0005537197 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 26 17:15:19 np0005537197 systemd: Reached target Local Encrypted Volumes.
Nov 26 17:15:19 np0005537197 systemd: Reached target Initrd /usr File System.
Nov 26 17:15:19 np0005537197 systemd: Reached target Local File Systems.
Nov 26 17:15:19 np0005537197 systemd: Reached target Path Units.
Nov 26 17:15:19 np0005537197 systemd: Reached target Slice Units.
Nov 26 17:15:19 np0005537197 systemd: Reached target Swaps.
Nov 26 17:15:19 np0005537197 systemd: Reached target Timer Units.
Nov 26 17:15:19 np0005537197 systemd: Listening on D-Bus System Message Bus Socket.
Nov 26 17:15:19 np0005537197 systemd: Listening on Journal Socket (/dev/log).
Nov 26 17:15:19 np0005537197 systemd: Listening on Journal Socket.
Nov 26 17:15:19 np0005537197 systemd: Listening on udev Control Socket.
Nov 26 17:15:19 np0005537197 systemd: Listening on udev Kernel Socket.
Nov 26 17:15:19 np0005537197 systemd: Reached target Socket Units.
Nov 26 17:15:19 np0005537197 systemd: Starting Create List of Static Device Nodes...
Nov 26 17:15:19 np0005537197 systemd: Starting Journal Service...
Nov 26 17:15:19 np0005537197 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 26 17:15:19 np0005537197 systemd: Starting Apply Kernel Variables...
Nov 26 17:15:19 np0005537197 systemd: Starting Create System Users...
Nov 26 17:15:19 np0005537197 systemd: Starting Setup Virtual Console...
Nov 26 17:15:19 np0005537197 systemd: Finished Create List of Static Device Nodes.
Nov 26 17:15:19 np0005537197 systemd: Finished Apply Kernel Variables.
Nov 26 17:15:19 np0005537197 systemd: Finished Create System Users.
Nov 26 17:15:19 np0005537197 systemd-journald[305]: Journal started
Nov 26 17:15:19 np0005537197 systemd-journald[305]: Runtime Journal (/run/log/journal/d7e69efcd84d42248bbd5fd303612f05) is 8.0M, max 153.6M, 145.6M free.
Nov 26 17:15:19 np0005537197 systemd-sysusers[308]: Creating group 'users' with GID 100.
Nov 26 17:15:19 np0005537197 systemd-sysusers[308]: Creating group 'dbus' with GID 81.
Nov 26 17:15:19 np0005537197 systemd-sysusers[308]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 26 17:15:19 np0005537197 systemd: Started Journal Service.
Nov 26 17:15:19 np0005537197 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 26 17:15:19 np0005537197 systemd[1]: Starting Create Volatile Files and Directories...
Nov 26 17:15:19 np0005537197 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 26 17:15:19 np0005537197 systemd[1]: Finished Create Volatile Files and Directories.
Nov 26 17:15:19 np0005537197 systemd[1]: Finished Setup Virtual Console.
Nov 26 17:15:19 np0005537197 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 26 17:15:19 np0005537197 systemd[1]: Starting dracut cmdline hook...
Nov 26 17:15:19 np0005537197 dracut-cmdline[325]: dracut-9 dracut-057-102.git20250818.el9
Nov 26 17:15:19 np0005537197 dracut-cmdline[325]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 26 17:15:19 np0005537197 systemd[1]: Finished dracut cmdline hook.
Nov 26 17:15:19 np0005537197 systemd[1]: Starting dracut pre-udev hook...
Nov 26 17:15:19 np0005537197 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 26 17:15:19 np0005537197 kernel: device-mapper: uevent: version 1.0.3
Nov 26 17:15:19 np0005537197 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 26 17:15:19 np0005537197 kernel: RPC: Registered named UNIX socket transport module.
Nov 26 17:15:19 np0005537197 kernel: RPC: Registered udp transport module.
Nov 26 17:15:19 np0005537197 kernel: RPC: Registered tcp transport module.
Nov 26 17:15:19 np0005537197 kernel: RPC: Registered tcp-with-tls transport module.
Nov 26 17:15:19 np0005537197 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 26 17:15:20 np0005537197 rpc.statd[443]: Version 2.5.4 starting
Nov 26 17:15:20 np0005537197 rpc.statd[443]: Initializing NSM state
Nov 26 17:15:20 np0005537197 rpc.idmapd[448]: Setting log level to 0
Nov 26 17:15:20 np0005537197 systemd[1]: Finished dracut pre-udev hook.
Nov 26 17:15:20 np0005537197 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 26 17:15:20 np0005537197 systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Nov 26 17:15:20 np0005537197 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 26 17:15:20 np0005537197 systemd[1]: Starting dracut pre-trigger hook...
Nov 26 17:15:20 np0005537197 systemd[1]: Finished dracut pre-trigger hook.
Nov 26 17:15:20 np0005537197 systemd[1]: Starting Coldplug All udev Devices...
Nov 26 17:15:20 np0005537197 systemd[1]: Created slice Slice /system/modprobe.
Nov 26 17:15:20 np0005537197 systemd[1]: Starting Load Kernel Module configfs...
Nov 26 17:15:20 np0005537197 systemd[1]: Finished Coldplug All udev Devices.
Nov 26 17:15:20 np0005537197 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 26 17:15:20 np0005537197 systemd[1]: Finished Load Kernel Module configfs.
Nov 26 17:15:20 np0005537197 systemd[1]: Mounting Kernel Configuration File System...
Nov 26 17:15:20 np0005537197 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 26 17:15:20 np0005537197 systemd[1]: Reached target Network.
Nov 26 17:15:20 np0005537197 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 26 17:15:20 np0005537197 systemd[1]: Starting dracut initqueue hook...
Nov 26 17:15:20 np0005537197 systemd[1]: Mounted Kernel Configuration File System.
Nov 26 17:15:20 np0005537197 systemd[1]: Reached target System Initialization.
Nov 26 17:15:20 np0005537197 systemd[1]: Reached target Basic System.
Nov 26 17:15:20 np0005537197 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 26 17:15:20 np0005537197 systemd-udevd[474]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 17:15:20 np0005537197 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 26 17:15:20 np0005537197 kernel: vda: vda1
Nov 26 17:15:20 np0005537197 kernel: scsi host0: ata_piix
Nov 26 17:15:20 np0005537197 kernel: scsi host1: ata_piix
Nov 26 17:15:20 np0005537197 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 26 17:15:20 np0005537197 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 26 17:15:20 np0005537197 systemd[1]: Found device /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 26 17:15:20 np0005537197 systemd[1]: Reached target Initrd Root Device.
Nov 26 17:15:20 np0005537197 kernel: ata1: found unknown device (class 0)
Nov 26 17:15:20 np0005537197 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 26 17:15:20 np0005537197 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 26 17:15:20 np0005537197 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 26 17:15:20 np0005537197 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 26 17:15:20 np0005537197 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 26 17:15:20 np0005537197 systemd[1]: Finished dracut initqueue hook.
Nov 26 17:15:20 np0005537197 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 26 17:15:20 np0005537197 systemd[1]: Reached target Remote Encrypted Volumes.
Nov 26 17:15:20 np0005537197 systemd[1]: Reached target Remote File Systems.
Nov 26 17:15:20 np0005537197 systemd[1]: Starting dracut pre-mount hook...
Nov 26 17:15:20 np0005537197 systemd[1]: Finished dracut pre-mount hook.
Nov 26 17:15:20 np0005537197 systemd[1]: Starting File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253...
Nov 26 17:15:21 np0005537197 systemd-fsck[557]: /usr/sbin/fsck.xfs: XFS file system.
Nov 26 17:15:21 np0005537197 systemd[1]: Finished File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 26 17:15:21 np0005537197 systemd[1]: Mounting /sysroot...
Nov 26 17:15:21 np0005537197 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 26 17:15:21 np0005537197 kernel: XFS (vda1): Mounting V5 Filesystem b277050f-8ace-464d-abb6-4c46d4c45253
Nov 26 17:15:21 np0005537197 kernel: XFS (vda1): Ending clean mount
Nov 26 17:15:21 np0005537197 systemd[1]: Mounted /sysroot.
Nov 26 17:15:21 np0005537197 systemd[1]: Reached target Initrd Root File System.
Nov 26 17:15:21 np0005537197 systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 26 17:15:21 np0005537197 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 26 17:15:21 np0005537197 systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 26 17:15:21 np0005537197 systemd[1]: Reached target Initrd File Systems.
Nov 26 17:15:21 np0005537197 systemd[1]: Reached target Initrd Default Target.
Nov 26 17:15:21 np0005537197 systemd[1]: Starting dracut mount hook...
Nov 26 17:15:21 np0005537197 systemd[1]: Finished dracut mount hook.
Nov 26 17:15:21 np0005537197 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 26 17:15:21 np0005537197 rpc.idmapd[448]: exiting on signal 15
Nov 26 17:15:21 np0005537197 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 26 17:15:21 np0005537197 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 26 17:15:21 np0005537197 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped target Network.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped target Timer Units.
Nov 26 17:15:21 np0005537197 systemd[1]: dbus.socket: Deactivated successfully.
Nov 26 17:15:21 np0005537197 systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 26 17:15:21 np0005537197 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped target Initrd Default Target.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped target Basic System.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped target Initrd Root Device.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped target Initrd /usr File System.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped target Path Units.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped target Remote File Systems.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped target Slice Units.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped target Socket Units.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped target System Initialization.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped target Local File Systems.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped target Swaps.
Nov 26 17:15:21 np0005537197 systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped dracut mount hook.
Nov 26 17:15:21 np0005537197 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped dracut pre-mount hook.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped target Local Encrypted Volumes.
Nov 26 17:15:21 np0005537197 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 26 17:15:21 np0005537197 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped dracut initqueue hook.
Nov 26 17:15:21 np0005537197 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped Apply Kernel Variables.
Nov 26 17:15:21 np0005537197 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped Create Volatile Files and Directories.
Nov 26 17:15:21 np0005537197 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped Coldplug All udev Devices.
Nov 26 17:15:21 np0005537197 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped dracut pre-trigger hook.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 26 17:15:21 np0005537197 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped Setup Virtual Console.
Nov 26 17:15:21 np0005537197 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 26 17:15:21 np0005537197 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 26 17:15:21 np0005537197 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 26 17:15:21 np0005537197 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 26 17:15:21 np0005537197 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 26 17:15:21 np0005537197 systemd[1]: systemd-udevd.service: Consumed 1.151s CPU time.
Nov 26 17:15:21 np0005537197 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 26 17:15:21 np0005537197 systemd[1]: Closed udev Control Socket.
Nov 26 17:15:21 np0005537197 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 26 17:15:21 np0005537197 systemd[1]: Closed udev Kernel Socket.
Nov 26 17:15:21 np0005537197 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped dracut pre-udev hook.
Nov 26 17:15:21 np0005537197 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped dracut cmdline hook.
Nov 26 17:15:21 np0005537197 systemd[1]: Starting Cleanup udev Database...
Nov 26 17:15:21 np0005537197 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 26 17:15:21 np0005537197 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped Create List of Static Device Nodes.
Nov 26 17:15:21 np0005537197 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 26 17:15:21 np0005537197 systemd[1]: Stopped Create System Users.
Nov 26 17:15:21 np0005537197 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 26 17:15:21 np0005537197 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 26 17:15:21 np0005537197 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 26 17:15:21 np0005537197 systemd[1]: Finished Cleanup udev Database.
Nov 26 17:15:21 np0005537197 systemd[1]: Reached target Switch Root.
Nov 26 17:15:21 np0005537197 systemd[1]: Starting Switch Root...
Nov 26 17:15:21 np0005537197 systemd[1]: Switching root.
Nov 26 17:15:22 np0005537197 systemd-journald[305]: Journal stopped
Nov 26 17:15:23 np0005537197 systemd-journald: Received SIGTERM from PID 1 (systemd).
Nov 26 17:15:23 np0005537197 kernel: audit: type=1404 audit(1764195322.192:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 26 17:15:23 np0005537197 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 17:15:23 np0005537197 kernel: SELinux:  policy capability open_perms=1
Nov 26 17:15:23 np0005537197 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 17:15:23 np0005537197 kernel: SELinux:  policy capability always_check_network=0
Nov 26 17:15:23 np0005537197 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 17:15:23 np0005537197 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 17:15:23 np0005537197 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 17:15:23 np0005537197 kernel: audit: type=1403 audit(1764195322.368:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 26 17:15:23 np0005537197 systemd: Successfully loaded SELinux policy in 183.147ms.
Nov 26 17:15:23 np0005537197 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 37.863ms.
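
Note: the audit records above carry their own timestamps as seconds since the Unix epoch (audit(1764195322.368:3)), and the type=1404 record shows the switch from old_enforcing=0 to enforcing=1. A minimal Python sketch cross-checking both, assuming the standard selinuxfs mount at /sys/fs/selinux (values copied from the lines above):

    import datetime

    # Audit timestamps are epoch seconds (UTC); 1764195322.368 is the
    # audit(...) field from the type=1403 record above.
    ts = 1764195322.368
    print(datetime.datetime.fromtimestamp(ts, tz=datetime.timezone.utc))
    # -> 2025-11-26 22:15:22.368000+00:00, i.e. 17:15:22 at this log's
    #    UTC-5 local offset, within a second of the syslog line time.

    # The kernel exposes the current SELinux state via selinuxfs; "1"
    # corresponds to enforcing=1 in the audit record.
    with open("/sys/fs/selinux/enforce") as f:
        print("enforcing:", f.read().strip())
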
Nov 26 17:15:23 np0005537197 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 26 17:15:23 np0005537197 systemd: Detected virtualization kvm.
Nov 26 17:15:23 np0005537197 systemd: Detected architecture x86-64.
Nov 26 17:15:23 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 17:15:23 np0005537197 systemd: initrd-switch-root.service: Deactivated successfully.
Nov 26 17:15:23 np0005537197 systemd: Stopped Switch Root.
Nov 26 17:15:23 np0005537197 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 26 17:15:23 np0005537197 systemd: Created slice Slice /system/getty.
Nov 26 17:15:23 np0005537197 systemd: Created slice Slice /system/serial-getty.
Nov 26 17:15:23 np0005537197 systemd: Created slice Slice /system/sshd-keygen.
Nov 26 17:15:23 np0005537197 systemd: Created slice User and Session Slice.
Nov 26 17:15:23 np0005537197 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 26 17:15:23 np0005537197 systemd: Started Forward Password Requests to Wall Directory Watch.
Nov 26 17:15:23 np0005537197 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 26 17:15:23 np0005537197 systemd: Reached target Local Encrypted Volumes.
Nov 26 17:15:23 np0005537197 systemd: Stopped target Switch Root.
Nov 26 17:15:23 np0005537197 systemd: Stopped target Initrd File Systems.
Nov 26 17:15:23 np0005537197 systemd: Stopped target Initrd Root File System.
Nov 26 17:15:23 np0005537197 systemd: Reached target Local Integrity Protected Volumes.
Nov 26 17:15:23 np0005537197 systemd: Reached target Path Units.
Nov 26 17:15:23 np0005537197 systemd: Reached target rpc_pipefs.target.
Nov 26 17:15:23 np0005537197 systemd: Reached target Slice Units.
Nov 26 17:15:23 np0005537197 systemd: Reached target Swaps.
Nov 26 17:15:23 np0005537197 systemd: Reached target Local Verity Protected Volumes.
Nov 26 17:15:23 np0005537197 systemd: Listening on RPCbind Server Activation Socket.
Nov 26 17:15:23 np0005537197 systemd: Reached target RPC Port Mapper.
Nov 26 17:15:23 np0005537197 systemd: Listening on Process Core Dump Socket.
Nov 26 17:15:23 np0005537197 systemd: Listening on initctl Compatibility Named Pipe.
Nov 26 17:15:23 np0005537197 systemd: Listening on udev Control Socket.
Nov 26 17:15:23 np0005537197 systemd: Listening on udev Kernel Socket.
Nov 26 17:15:23 np0005537197 systemd: Mounting Huge Pages File System...
Nov 26 17:15:23 np0005537197 systemd: Mounting POSIX Message Queue File System...
Nov 26 17:15:23 np0005537197 systemd: Mounting Kernel Debug File System...
Nov 26 17:15:23 np0005537197 systemd: Mounting Kernel Trace File System...
Nov 26 17:15:23 np0005537197 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 26 17:15:23 np0005537197 systemd: Starting Create List of Static Device Nodes...
Nov 26 17:15:23 np0005537197 systemd: Starting Load Kernel Module configfs...
Nov 26 17:15:23 np0005537197 systemd: Starting Load Kernel Module drm...
Nov 26 17:15:23 np0005537197 systemd: Starting Load Kernel Module efi_pstore...
Nov 26 17:15:23 np0005537197 systemd: Starting Load Kernel Module fuse...
Nov 26 17:15:23 np0005537197 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 26 17:15:23 np0005537197 systemd: systemd-fsck-root.service: Deactivated successfully.
Nov 26 17:15:23 np0005537197 systemd: Stopped File System Check on Root Device.
Nov 26 17:15:23 np0005537197 systemd: Stopped Journal Service.
Nov 26 17:15:23 np0005537197 systemd: Starting Journal Service...
Nov 26 17:15:23 np0005537197 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 26 17:15:23 np0005537197 systemd: Starting Generate network units from Kernel command line...
Nov 26 17:15:23 np0005537197 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 26 17:15:23 np0005537197 systemd: Starting Remount Root and Kernel File Systems...
Nov 26 17:15:23 np0005537197 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 26 17:15:23 np0005537197 systemd: Starting Apply Kernel Variables...
Nov 26 17:15:23 np0005537197 systemd: Starting Coldplug All udev Devices...
Nov 26 17:15:23 np0005537197 systemd: Mounted Huge Pages File System.
Nov 26 17:15:23 np0005537197 systemd: Mounted POSIX Message Queue File System.
Nov 26 17:15:23 np0005537197 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 26 17:15:23 np0005537197 systemd: Mounted Kernel Debug File System.
Nov 26 17:15:23 np0005537197 systemd: Mounted Kernel Trace File System.
Nov 26 17:15:23 np0005537197 systemd-journald[679]: Journal started
Nov 26 17:15:23 np0005537197 systemd-journald[679]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 26 17:15:23 np0005537197 systemd[1]: Queued start job for default target Multi-User System.
Nov 26 17:15:23 np0005537197 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 26 17:15:23 np0005537197 systemd: Finished Create List of Static Device Nodes.
Nov 26 17:15:23 np0005537197 kernel: ACPI: bus type drm_connector registered
Nov 26 17:15:23 np0005537197 systemd: Started Journal Service.
Nov 26 17:15:23 np0005537197 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 26 17:15:23 np0005537197 systemd[1]: Finished Load Kernel Module configfs.
Nov 26 17:15:23 np0005537197 kernel: fuse: init (API version 7.37)
Nov 26 17:15:23 np0005537197 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 26 17:15:23 np0005537197 systemd[1]: Finished Load Kernel Module drm.
Nov 26 17:15:23 np0005537197 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 26 17:15:23 np0005537197 systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 26 17:15:23 np0005537197 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 26 17:15:23 np0005537197 systemd[1]: Finished Load Kernel Module fuse.
Nov 26 17:15:23 np0005537197 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 26 17:15:23 np0005537197 systemd[1]: Finished Generate network units from Kernel command line.
Nov 26 17:15:23 np0005537197 systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 26 17:15:23 np0005537197 systemd[1]: Finished Apply Kernel Variables.
Nov 26 17:15:23 np0005537197 systemd[1]: Mounting FUSE Control File System...
Nov 26 17:15:23 np0005537197 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 26 17:15:23 np0005537197 systemd[1]: Starting Rebuild Hardware Database...
Nov 26 17:15:23 np0005537197 systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 26 17:15:23 np0005537197 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 26 17:15:23 np0005537197 systemd[1]: Starting Load/Save OS Random Seed...
Nov 26 17:15:23 np0005537197 systemd[1]: Starting Create System Users...
Nov 26 17:15:23 np0005537197 systemd[1]: Mounted FUSE Control File System.
Nov 26 17:15:23 np0005537197 systemd-journald[679]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 26 17:15:23 np0005537197 systemd-journald[679]: Received client request to flush runtime journal.
Nov 26 17:15:23 np0005537197 systemd[1]: Finished Flush Journal to Persistent Storage.
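
Note: the journald size report above is self-consistent: with the runtime journal at 8.0M out of a 153.6M cap, 145.6M remain. A one-line check of that arithmetic:

    used, cap = 8.0, 153.6             # MiB figures from the journald line above
    print(f"free: {cap - used:.1f}M")  # -> free: 145.6M, matching the log
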
Nov 26 17:15:23 np0005537197 systemd[1]: Finished Load/Save OS Random Seed.
Nov 26 17:15:23 np0005537197 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 26 17:15:23 np0005537197 systemd[1]: Finished Coldplug All udev Devices.
Nov 26 17:15:23 np0005537197 systemd[1]: Finished Create System Users.
Nov 26 17:15:23 np0005537197 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 26 17:15:23 np0005537197 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 26 17:15:23 np0005537197 systemd[1]: Reached target Preparation for Local File Systems.
Nov 26 17:15:23 np0005537197 systemd[1]: Reached target Local File Systems.
Nov 26 17:15:23 np0005537197 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 26 17:15:23 np0005537197 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 26 17:15:23 np0005537197 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 26 17:15:23 np0005537197 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 26 17:15:23 np0005537197 systemd[1]: Starting Automatic Boot Loader Update...
Nov 26 17:15:23 np0005537197 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 26 17:15:23 np0005537197 systemd[1]: Starting Create Volatile Files and Directories...
Nov 26 17:15:23 np0005537197 bootctl[697]: Couldn't find EFI system partition, skipping.
Nov 26 17:15:23 np0005537197 systemd[1]: Finished Automatic Boot Loader Update.
Nov 26 17:15:23 np0005537197 systemd[1]: Finished Create Volatile Files and Directories.
Nov 26 17:15:23 np0005537197 systemd[1]: Starting Security Auditing Service...
Nov 26 17:15:23 np0005537197 systemd[1]: Starting RPC Bind...
Nov 26 17:15:23 np0005537197 systemd[1]: Starting Rebuild Journal Catalog...
Nov 26 17:15:23 np0005537197 auditd[702]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 26 17:15:23 np0005537197 auditd[702]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 26 17:15:23 np0005537197 systemd[1]: Finished Rebuild Journal Catalog.
Nov 26 17:15:23 np0005537197 systemd[1]: Started RPC Bind.
Nov 26 17:15:23 np0005537197 augenrules[708]: /sbin/augenrules: No change
Nov 26 17:15:23 np0005537197 augenrules[723]: No rules
Nov 26 17:15:23 np0005537197 augenrules[723]: enabled 1
Nov 26 17:15:23 np0005537197 augenrules[723]: failure 1
Nov 26 17:15:23 np0005537197 augenrules[723]: pid 702
Nov 26 17:15:23 np0005537197 augenrules[723]: rate_limit 0
Nov 26 17:15:23 np0005537197 augenrules[723]: backlog_limit 8192
Nov 26 17:15:23 np0005537197 augenrules[723]: lost 0
Nov 26 17:15:23 np0005537197 augenrules[723]: backlog 0
Nov 26 17:15:23 np0005537197 augenrules[723]: backlog_wait_time 60000
Nov 26 17:15:23 np0005537197 augenrules[723]: backlog_wait_time_actual 0
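
Note: the augenrules block above is a kernel audit status report (one "key value" pair per line); pid 702 matches the auditd PID logged earlier. A minimal sketch parsing such a block into a dict, with the field names copied verbatim from this log:

    status_text = """\
    enabled 1
    failure 1
    pid 702
    rate_limit 0
    backlog_limit 8192
    lost 0
    backlog 0
    backlog_wait_time 60000
    backlog_wait_time_actual 0"""

    # Each line is "<key> <integer>"; split() tolerates the indentation.
    status = {k: int(v) for k, v in (line.split() for line in status_text.splitlines())}
    assert status["pid"] == 702 and status["lost"] == 0
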
Nov 26 17:15:23 np0005537197 systemd[1]: Started Security Auditing Service.
Nov 26 17:15:23 np0005537197 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 26 17:15:23 np0005537197 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 26 17:15:23 np0005537197 systemd[1]: Finished Rebuild Hardware Database.
Nov 26 17:15:23 np0005537197 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 26 17:15:23 np0005537197 systemd-udevd[731]: Using default interface naming scheme 'rhel-9.0'.
Nov 26 17:15:23 np0005537197 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 26 17:15:23 np0005537197 systemd[1]: Starting Update is Completed...
Nov 26 17:15:23 np0005537197 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 26 17:15:23 np0005537197 systemd[1]: Starting Load Kernel Module configfs...
Nov 26 17:15:23 np0005537197 systemd[1]: Finished Update is Completed.
Nov 26 17:15:23 np0005537197 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 26 17:15:23 np0005537197 systemd[1]: Finished Load Kernel Module configfs.
Nov 26 17:15:23 np0005537197 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 26 17:15:23 np0005537197 systemd-udevd[737]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 17:15:24 np0005537197 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 26 17:15:24 np0005537197 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 26 17:15:24 np0005537197 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 26 17:15:24 np0005537197 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 26 17:15:24 np0005537197 systemd[1]: Reached target System Initialization.
Nov 26 17:15:24 np0005537197 systemd[1]: Started dnf makecache --timer.
Nov 26 17:15:24 np0005537197 systemd[1]: Started Daily rotation of log files.
Nov 26 17:15:24 np0005537197 systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 26 17:15:24 np0005537197 systemd[1]: Reached target Timer Units.
Nov 26 17:15:24 np0005537197 systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 26 17:15:24 np0005537197 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 26 17:15:24 np0005537197 systemd[1]: Reached target Socket Units.
Nov 26 17:15:24 np0005537197 systemd[1]: Starting D-Bus System Message Bus...
Nov 26 17:15:24 np0005537197 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 26 17:15:24 np0005537197 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 26 17:15:24 np0005537197 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 26 17:15:24 np0005537197 kernel: Console: switching to colour dummy device 80x25
Nov 26 17:15:24 np0005537197 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 26 17:15:24 np0005537197 kernel: [drm] features: -context_init
Nov 26 17:15:24 np0005537197 kernel: [drm] number of scanouts: 1
Nov 26 17:15:24 np0005537197 kernel: [drm] number of cap sets: 0
Nov 26 17:15:24 np0005537197 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 26 17:15:24 np0005537197 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 26 17:15:24 np0005537197 kernel: Console: switching to colour frame buffer device 128x48
Nov 26 17:15:24 np0005537197 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 26 17:15:24 np0005537197 kernel: kvm_amd: TSC scaling supported
Nov 26 17:15:24 np0005537197 kernel: kvm_amd: Nested Virtualization enabled
Nov 26 17:15:24 np0005537197 kernel: kvm_amd: Nested Paging enabled
Nov 26 17:15:24 np0005537197 kernel: kvm_amd: LBR virtualization supported
Nov 26 17:15:24 np0005537197 systemd[1]: Started D-Bus System Message Bus.
Nov 26 17:15:24 np0005537197 systemd[1]: Reached target Basic System.
Nov 26 17:15:24 np0005537197 dbus-broker-lau[785]: Ready
Nov 26 17:15:24 np0005537197 systemd[1]: Starting NTP client/server...
Nov 26 17:15:24 np0005537197 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 26 17:15:24 np0005537197 systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 26 17:15:24 np0005537197 systemd[1]: Starting IPv4 firewall with iptables...
Nov 26 17:15:24 np0005537197 systemd[1]: Started irqbalance daemon.
Nov 26 17:15:24 np0005537197 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 26 17:15:24 np0005537197 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 26 17:15:24 np0005537197 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 26 17:15:24 np0005537197 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 26 17:15:24 np0005537197 systemd[1]: Reached target sshd-keygen.target.
Nov 26 17:15:24 np0005537197 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 26 17:15:24 np0005537197 systemd[1]: Reached target User and Group Name Lookups.
Nov 26 17:15:24 np0005537197 systemd[1]: Starting User Login Management...
Nov 26 17:15:24 np0005537197 systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 26 17:15:24 np0005537197 chronyd[831]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 26 17:15:24 np0005537197 chronyd[831]: Loaded 0 symmetric keys
Nov 26 17:15:24 np0005537197 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 26 17:15:24 np0005537197 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 26 17:15:24 np0005537197 chronyd[831]: Using right/UTC timezone to obtain leap second data
Nov 26 17:15:24 np0005537197 chronyd[831]: Loaded seccomp filter (level 2)
Nov 26 17:15:24 np0005537197 systemd[1]: Started NTP client/server.
Nov 26 17:15:24 np0005537197 systemd-logind[819]: New seat seat0.
Nov 26 17:15:24 np0005537197 systemd-logind[819]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 26 17:15:24 np0005537197 systemd-logind[819]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 26 17:15:24 np0005537197 systemd[1]: Started User Login Management.
Nov 26 17:15:24 np0005537197 iptables.init[801]: iptables: Applying firewall rules: [  OK  ]
Nov 26 17:15:24 np0005537197 systemd[1]: Finished IPv4 firewall with iptables.
Nov 26 17:15:24 np0005537197 cloud-init[841]: Cloud-init v. 24.4-7.el9 running 'init-local' at Wed, 26 Nov 2025 22:15:24 +0000. Up 7.66 seconds.
Nov 26 17:15:25 np0005537197 systemd[1]: run-cloud\x2dinit-tmp-tmpfkoml_7n.mount: Deactivated successfully.
Nov 26 17:15:25 np0005537197 systemd[1]: Starting Hostname Service...
Nov 26 17:15:25 np0005537197 systemd[1]: Started Hostname Service.
Nov 26 17:15:25 np0005537197 systemd-hostnamed[855]: Hostname set to <np0005537197.novalocal> (static)
Nov 26 17:15:25 np0005537197 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 26 17:15:25 np0005537197 systemd[1]: Reached target Preparation for Network.
Nov 26 17:15:25 np0005537197 systemd[1]: Starting Network Manager...
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5124] NetworkManager (version 1.54.1-1.el9) is starting... (boot:3c62c5e9-a0a5-407a-900d-a0335b249ae4)
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5129] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5253] manager[0x55bc40cd6080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5304] hostname: hostname: using hostnamed
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5305] hostname: static hostname changed from (none) to "np0005537197.novalocal"
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5308] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5442] manager[0x55bc40cd6080]: rfkill: Wi-Fi hardware radio set enabled
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5443] manager[0x55bc40cd6080]: rfkill: WWAN hardware radio set enabled
Nov 26 17:15:25 np0005537197 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5525] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5526] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5527] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5527] manager: Networking is enabled by state file
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5529] settings: Loaded settings plugin: keyfile (internal)
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5558] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5583] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5604] dhcp: init: Using DHCP client 'internal'
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5606] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5616] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5628] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5634] device (lo): Activation: starting connection 'lo' (ea7e87d8-c88d-44a1-b899-586bba8705c2)
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5641] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5643] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5670] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5673] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5675] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5677] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5679] device (eth0): carrier: link connected
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5682] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5687] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5692] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5697] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5698] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5700] manager: NetworkManager state is now CONNECTING
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5702] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5707] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5709] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5749] dhcp4 (eth0): state changed new lease, address=38.102.83.156
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5756] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 26 17:15:25 np0005537197 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5772] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 17:15:25 np0005537197 systemd[1]: Started Network Manager.
Nov 26 17:15:25 np0005537197 systemd[1]: Reached target Network.
Nov 26 17:15:25 np0005537197 systemd[1]: Starting Network Manager Wait Online...
Nov 26 17:15:25 np0005537197 systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 26 17:15:25 np0005537197 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5949] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5950] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5951] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5957] device (lo): Activation: successful, device activated.
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5962] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5964] manager: NetworkManager state is now CONNECTED_SITE
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5966] device (eth0): Activation: successful, device activated.
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5972] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 26 17:15:25 np0005537197 NetworkManager[859]: <info>  [1764195325.5974] manager: startup complete
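
Note: the bracketed NetworkManager numbers are epoch seconds with sub-second precision, so activation latency can be read straight off the log. A small sketch using timestamps copied from the lines above:

    nm_start   = 1764195325.5124  # "NetworkManager ... is starting..."
    dhcp_begin = 1764195325.5709  # "dhcp4 (eth0): activation: beginning transaction"
    lease      = 1764195325.5749  # "dhcp4 (eth0): state changed new lease"
    done       = 1764195325.5974  # "manager: startup complete"
    print(f"DHCP lease after {(lease - dhcp_begin) * 1e3:.1f} ms")        # ~4 ms
    print(f"NM startup complete after {(done - nm_start) * 1e3:.1f} ms")  # ~85 ms
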
Nov 26 17:15:25 np0005537197 systemd[1]: Finished Network Manager Wait Online.
Nov 26 17:15:25 np0005537197 systemd[1]: Started GSSAPI Proxy Daemon.
Nov 26 17:15:25 np0005537197 systemd[1]: Starting Cloud-init: Network Stage...
Nov 26 17:15:25 np0005537197 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 26 17:15:25 np0005537197 systemd[1]: Reached target NFS client services.
Nov 26 17:15:25 np0005537197 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 26 17:15:25 np0005537197 systemd[1]: Reached target Remote File Systems.
Nov 26 17:15:25 np0005537197 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 26 17:15:25 np0005537197 cloud-init[922]: Cloud-init v. 24.4-7.el9 running 'init' at Wed, 26 Nov 2025 22:15:25 +0000. Up 8.65 seconds.
Nov 26 17:15:26 np0005537197 cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 26 17:15:26 np0005537197 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 26 17:15:26 np0005537197 cloud-init[922]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 26 17:15:26 np0005537197 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 26 17:15:26 np0005537197 cloud-init[922]: ci-info: |  eth0  | True |        38.102.83.156         | 255.255.255.0 | global | fa:16:3e:96:7b:50 |
Nov 26 17:15:26 np0005537197 cloud-init[922]: ci-info: |  eth0  | True | fe80::f816:3eff:fe96:7b50/64 |       .       |  link  | fa:16:3e:96:7b:50 |
Nov 26 17:15:26 np0005537197 cloud-init[922]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 26 17:15:26 np0005537197 cloud-init[922]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 26 17:15:26 np0005537197 cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 26 17:15:26 np0005537197 cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 26 17:15:26 np0005537197 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 26 17:15:26 np0005537197 cloud-init[922]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 26 17:15:26 np0005537197 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 26 17:15:26 np0005537197 cloud-init[922]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 26 17:15:26 np0005537197 cloud-init[922]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 26 17:15:26 np0005537197 cloud-init[922]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 26 17:15:26 np0005537197 cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 26 17:15:26 np0005537197 cloud-init[922]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 26 17:15:26 np0005537197 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 26 17:15:26 np0005537197 cloud-init[922]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 26 17:15:26 np0005537197 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 26 17:15:26 np0005537197 cloud-init[922]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 26 17:15:26 np0005537197 cloud-init[922]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 26 17:15:26 np0005537197 cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
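
Note: the ci-info tables above are internally consistent, and the eth0 row explains IPv4 routes 0 and 1. A short worked check with Python's ipaddress module (address, mask and gateway copied from the tables):

    import ipaddress

    iface = ipaddress.ip_interface("38.102.83.156/255.255.255.0")
    print(iface.network)        # 38.102.83.0/24 -> route 1 (on-link subnet)
    gw = ipaddress.ip_address("38.102.83.1")
    print(gw in iface.network)  # True: the default gateway (route 0) is on-link
    # Route 2 is the usual OpenStack host route to the metadata service
    # at 169.254.169.254 via 38.102.83.126.
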
Nov 26 17:15:27 np0005537197 cloud-init[922]: Generating public/private rsa key pair.
Nov 26 17:15:27 np0005537197 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 26 17:15:27 np0005537197 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 26 17:15:27 np0005537197 cloud-init[922]: The key fingerprint is:
Nov 26 17:15:27 np0005537197 cloud-init[922]: SHA256:QoCzKXOSiPDZFTP50epkYOLZHjaIlDQ2IzuzupHIXdY root@np0005537197.novalocal
Nov 26 17:15:27 np0005537197 cloud-init[922]: The key's randomart image is:
Nov 26 17:15:27 np0005537197 cloud-init[922]: +---[RSA 3072]----+
Nov 26 17:15:27 np0005537197 cloud-init[922]: |..*o. +o .       |
Nov 26 17:15:27 np0005537197 cloud-init[922]: |.+=+..=o. .      |
Nov 26 17:15:27 np0005537197 cloud-init[922]: |B+ O B.o o       |
Nov 26 17:15:27 np0005537197 cloud-init[922]: |B+B =.* =        |
Nov 26 17:15:27 np0005537197 cloud-init[922]: |.=   =.ES        |
Nov 26 17:15:27 np0005537197 cloud-init[922]: |+.. o ...        |
Nov 26 17:15:27 np0005537197 cloud-init[922]: |=. .             |
Nov 26 17:15:27 np0005537197 cloud-init[922]: | o               |
Nov 26 17:15:27 np0005537197 cloud-init[922]: |.                |
Nov 26 17:15:27 np0005537197 cloud-init[922]: +----[SHA256]-----+
Nov 26 17:15:27 np0005537197 cloud-init[922]: Generating public/private ecdsa key pair.
Nov 26 17:15:27 np0005537197 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 26 17:15:27 np0005537197 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 26 17:15:27 np0005537197 cloud-init[922]: The key fingerprint is:
Nov 26 17:15:27 np0005537197 cloud-init[922]: SHA256:MUWzn8+w4io5G/qBXNFu5ARc6OTL7KIbuU+Vt8G4ztg root@np0005537197.novalocal
Nov 26 17:15:27 np0005537197 cloud-init[922]: The key's randomart image is:
Nov 26 17:15:27 np0005537197 cloud-init[922]: +---[ECDSA 256]---+
Nov 26 17:15:27 np0005537197 cloud-init[922]: |     ..o..+      |
Nov 26 17:15:27 np0005537197 cloud-init[922]: |      +o . o     |
Nov 26 17:15:27 np0005537197 cloud-init[922]: |     +. * .      |
Nov 26 17:15:27 np0005537197 cloud-init[922]: |      oX o . .   |
Nov 26 17:15:27 np0005537197 cloud-init[922]: |     o=.S   +    |
Nov 26 17:15:27 np0005537197 cloud-init[922]: |   o +++ o   =   |
Nov 26 17:15:27 np0005537197 cloud-init[922]: |  o +.+.. . . o  |
Nov 26 17:15:27 np0005537197 cloud-init[922]: |   +.*=o . .     |
Nov 26 17:15:27 np0005537197 cloud-init[922]: |  +++oE+...      |
Nov 26 17:15:27 np0005537197 cloud-init[922]: +----[SHA256]-----+
Nov 26 17:15:27 np0005537197 cloud-init[922]: Generating public/private ed25519 key pair.
Nov 26 17:15:27 np0005537197 cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 26 17:15:27 np0005537197 cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 26 17:15:27 np0005537197 cloud-init[922]: The key fingerprint is:
Nov 26 17:15:27 np0005537197 cloud-init[922]: SHA256:OuCFENYsCGvekhmcItPN3ya3FOEBlAmhFIU3/u+E2aI root@np0005537197.novalocal
Nov 26 17:15:27 np0005537197 cloud-init[922]: The key's randomart image is:
Nov 26 17:15:27 np0005537197 cloud-init[922]: +--[ED25519 256]--+
Nov 26 17:15:27 np0005537197 cloud-init[922]: |o.=*++o+o        |
Nov 26 17:15:27 np0005537197 cloud-init[922]: |o=+== o. o       |
Nov 26 17:15:27 np0005537197 cloud-init[922]: |==++o.  o        |
Nov 26 17:15:27 np0005537197 cloud-init[922]: |=.=..o . .       |
Nov 26 17:15:27 np0005537197 cloud-init[922]: | = .o.+ S        |
Nov 26 17:15:27 np0005537197 cloud-init[922]: |  .. o.X .       |
Nov 26 17:15:27 np0005537197 cloud-init[922]: |    . *.+        |
Nov 26 17:15:27 np0005537197 cloud-init[922]: |     . +.        |
Nov 26 17:15:27 np0005537197 cloud-init[922]: |    E  ..        |
Nov 26 17:15:27 np0005537197 cloud-init[922]: +----[SHA256]-----+
Nov 26 17:15:27 np0005537197 systemd[1]: Finished Cloud-init: Network Stage.
Nov 26 17:15:27 np0005537197 systemd[1]: Reached target Cloud-config availability.
Nov 26 17:15:27 np0005537197 systemd[1]: Reached target Network is Online.
Nov 26 17:15:27 np0005537197 systemd[1]: Starting Cloud-init: Config Stage...
Nov 26 17:15:27 np0005537197 systemd[1]: Starting Crash recovery kernel arming...
Nov 26 17:15:27 np0005537197 systemd[1]: Starting Notify NFS peers of a restart...
Nov 26 17:15:27 np0005537197 systemd[1]: Starting System Logging Service...
Nov 26 17:15:27 np0005537197 systemd[1]: Starting OpenSSH server daemon...
Nov 26 17:15:27 np0005537197 sm-notify[1004]: Version 2.5.4 starting
Nov 26 17:15:27 np0005537197 systemd[1]: Starting Permit User Sessions...
Nov 26 17:15:27 np0005537197 systemd[1]: Started Notify NFS peers of a restart.
Nov 26 17:15:27 np0005537197 systemd[1]: Finished Permit User Sessions.
Nov 26 17:15:27 np0005537197 systemd[1]: Started Command Scheduler.
Nov 26 17:15:27 np0005537197 systemd[1]: Started Getty on tty1.
Nov 26 17:15:27 np0005537197 systemd[1]: Started Serial Getty on ttyS0.
Nov 26 17:15:27 np0005537197 systemd[1]: Reached target Login Prompts.
Nov 26 17:15:27 np0005537197 systemd[1]: Started OpenSSH server daemon.
Nov 26 17:15:27 np0005537197 systemd[1]: Started System Logging Service.
Nov 26 17:15:27 np0005537197 rsyslogd[1005]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1005" x-info="https://www.rsyslog.com"] start
Nov 26 17:15:27 np0005537197 rsyslogd[1005]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 26 17:15:27 np0005537197 systemd[1]: Reached target Multi-User System.
Nov 26 17:15:27 np0005537197 systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 26 17:15:27 np0005537197 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 17:15:27 np0005537197 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 26 17:15:27 np0005537197 systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 26 17:15:27 np0005537197 kdumpctl[1018]: kdump: No kdump initial ramdisk found.
Nov 26 17:15:27 np0005537197 kdumpctl[1018]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 26 17:15:27 np0005537197 cloud-init[1073]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Wed, 26 Nov 2025 22:15:27 +0000. Up 10.58 seconds.
Nov 26 17:15:28 np0005537197 systemd[1]: Finished Cloud-init: Config Stage.
Nov 26 17:15:28 np0005537197 systemd[1]: Starting Cloud-init: Final Stage...
Nov 26 17:15:28 np0005537197 cloud-init[1224]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Wed, 26 Nov 2025 22:15:28 +0000. Up 11.02 seconds.
Nov 26 17:15:28 np0005537197 cloud-init[1258]: #############################################################
Nov 26 17:15:28 np0005537197 cloud-init[1261]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 26 17:15:28 np0005537197 cloud-init[1268]: 256 SHA256:MUWzn8+w4io5G/qBXNFu5ARc6OTL7KIbuU+Vt8G4ztg root@np0005537197.novalocal (ECDSA)
Nov 26 17:15:28 np0005537197 cloud-init[1271]: 256 SHA256:OuCFENYsCGvekhmcItPN3ya3FOEBlAmhFIU3/u+E2aI root@np0005537197.novalocal (ED25519)
Nov 26 17:15:28 np0005537197 cloud-init[1275]: 3072 SHA256:QoCzKXOSiPDZFTP50epkYOLZHjaIlDQ2IzuzupHIXdY root@np0005537197.novalocal (RSA)
Nov 26 17:15:28 np0005537197 cloud-init[1276]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 26 17:15:28 np0005537197 cloud-init[1277]: #############################################################
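
Note: the SHA256 fingerprints printed by cloud-init match the host keys generated earlier in this boot. OpenSSH's SHA256 fingerprint is the SHA-256 digest of the base64-decoded public-key blob, re-encoded as unpadded base64; a minimal sketch reproducing it for the ed25519 key (path taken from the "saved in" line above):

    import base64, hashlib

    # "ssh-ed25519 <base64 blob> comment" -> field [1] is the key blob.
    blob_b64 = open("/etc/ssh/ssh_host_ed25519_key.pub").read().split()[1]
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    print("SHA256:" + base64.b64encode(digest).decode().rstrip("="))
    # On this host this should print
    # SHA256:OuCFENYsCGvekhmcItPN3ya3FOEBlAmhFIU3/u+E2aI
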
Nov 26 17:15:28 np0005537197 dracut[1280]: dracut-057-102.git20250818.el9
Nov 26 17:15:28 np0005537197 cloud-init[1224]: Cloud-init v. 24.4-7.el9 finished at Wed, 26 Nov 2025 22:15:28 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 11.28 seconds
Nov 26 17:15:28 np0005537197 systemd[1]: Finished Cloud-init: Final Stage.
Nov 26 17:15:28 np0005537197 systemd[1]: Reached target Cloud-init target.
Nov 26 17:15:28 np0005537197 dracut[1284]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
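
Note: kdumpctl names the image after the running kernel release, which is why dracut is writing /boot/initramfs-5.14.0-642.el9.x86_64kdump.img here. A sketch deriving that path, assuming platform.release() returns the kernel release string as it does on Linux:

    import platform

    image = f"/boot/initramfs-{platform.release()}kdump.img"
    print(image)  # /boot/initramfs-5.14.0-642.el9.x86_64kdump.img on this host
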
Nov 26 17:15:29 np0005537197 dracut[1284]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 26 17:15:29 np0005537197 dracut[1284]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 26 17:15:29 np0005537197 dracut[1284]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 26 17:15:29 np0005537197 dracut[1284]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 26 17:15:29 np0005537197 dracut[1284]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 26 17:15:29 np0005537197 dracut[1284]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 26 17:15:29 np0005537197 dracut[1284]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 26 17:15:29 np0005537197 dracut[1284]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 26 17:15:29 np0005537197 dracut[1284]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 26 17:15:29 np0005537197 dracut[1284]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 26 17:15:29 np0005537197 dracut[1284]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 26 17:15:29 np0005537197 dracut[1284]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 26 17:15:29 np0005537197 dracut[1284]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 26 17:15:29 np0005537197 dracut[1284]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 26 17:15:29 np0005537197 dracut[1284]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 26 17:15:29 np0005537197 dracut[1284]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 26 17:15:29 np0005537197 dracut[1284]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 26 17:15:29 np0005537197 dracut[1284]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 26 17:15:29 np0005537197 dracut[1284]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 26 17:15:29 np0005537197 dracut[1284]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 26 17:15:29 np0005537197 dracut[1284]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 26 17:15:29 np0005537197 dracut[1284]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 26 17:15:29 np0005537197 dracut[1284]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 26 17:15:29 np0005537197 dracut[1284]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 26 17:15:29 np0005537197 dracut[1284]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 26 17:15:29 np0005537197 dracut[1284]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 26 17:15:29 np0005537197 dracut[1284]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 26 17:15:30 np0005537197 dracut[1284]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 26 17:15:30 np0005537197 dracut[1284]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 26 17:15:30 np0005537197 dracut[1284]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 26 17:15:30 np0005537197 dracut[1284]: memstrack is not available
Nov 26 17:15:30 np0005537197 dracut[1284]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 26 17:15:30 np0005537197 chronyd[831]: Selected source 162.159.200.123 (2.centos.pool.ntp.org)
Nov 26 17:15:30 np0005537197 chronyd[831]: System clock TAI offset set to 37 seconds
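
Note: 37 seconds is the published TAI-UTC offset (unchanged since the 2017 leap second), so chrony's setting simply makes the kernel's TAI clock run that far ahead of UTC. A one-off illustration:

    import datetime

    utc = datetime.datetime(2025, 11, 26, 22, 15, 30, tzinfo=datetime.timezone.utc)
    tai = utc + datetime.timedelta(seconds=37)  # TAI = UTC + 37 s at present
    print(tai.time())                           # 22:16:07
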
Nov 26 17:15:31 np0005537197 dracut[1284]: *** Including module: systemd ***
Nov 26 17:15:31 np0005537197 dracut[1284]: *** Including module: fips ***
Nov 26 17:15:32 np0005537197 dracut[1284]: *** Including module: systemd-initrd ***
Nov 26 17:15:32 np0005537197 dracut[1284]: *** Including module: i18n ***
Nov 26 17:15:32 np0005537197 dracut[1284]: *** Including module: drm ***
Nov 26 17:15:32 np0005537197 dracut[1284]: *** Including module: prefixdevname ***
Nov 26 17:15:33 np0005537197 dracut[1284]: *** Including module: kernel-modules ***
Nov 26 17:15:33 np0005537197 kernel: block vda: the capability attribute has been deprecated.
Nov 26 17:15:34 np0005537197 dracut[1284]: *** Including module: kernel-modules-extra ***
Nov 26 17:15:34 np0005537197 dracut[1284]: *** Including module: qemu ***
Nov 26 17:15:34 np0005537197 irqbalance[803]: Cannot change IRQ 35 affinity: Operation not permitted
Nov 26 17:15:34 np0005537197 irqbalance[803]: IRQ 35 affinity is now unmanaged
Nov 26 17:15:34 np0005537197 irqbalance[803]: Cannot change IRQ 33 affinity: Operation not permitted
Nov 26 17:15:34 np0005537197 irqbalance[803]: IRQ 33 affinity is now unmanaged
Nov 26 17:15:34 np0005537197 irqbalance[803]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 26 17:15:34 np0005537197 irqbalance[803]: IRQ 31 affinity is now unmanaged
Nov 26 17:15:34 np0005537197 irqbalance[803]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 26 17:15:34 np0005537197 irqbalance[803]: IRQ 28 affinity is now unmanaged
Nov 26 17:15:34 np0005537197 irqbalance[803]: Cannot change IRQ 34 affinity: Operation not permitted
Nov 26 17:15:34 np0005537197 irqbalance[803]: IRQ 34 affinity is now unmanaged
Nov 26 17:15:34 np0005537197 irqbalance[803]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 26 17:15:34 np0005537197 irqbalance[803]: IRQ 32 affinity is now unmanaged
Nov 26 17:15:34 np0005537197 irqbalance[803]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 26 17:15:34 np0005537197 irqbalance[803]: IRQ 30 affinity is now unmanaged
Nov 26 17:15:34 np0005537197 irqbalance[803]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 26 17:15:34 np0005537197 irqbalance[803]: IRQ 29 affinity is now unmanaged
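
Note: in a KVM guest, irqbalance often cannot retarget certain interrupts and marks them unmanaged; IRQs 28-35 above are presumably this guest's virtio MSI-X vectors (an assumption, not stated in the log). The kernel still exposes their current masks; a minimal sketch reading them:

    # IRQ numbers copied from the irqbalance lines above.
    for irq in range(28, 36):
        try:
            with open(f"/proc/irq/{irq}/smp_affinity") as f:
                print(irq, f.read().strip())
        except OSError as err:
            print(irq, "unreadable:", err)
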
Nov 26 17:15:34 np0005537197 dracut[1284]: *** Including module: fstab-sys ***
Nov 26 17:15:34 np0005537197 dracut[1284]: *** Including module: rootfs-block ***
Nov 26 17:15:34 np0005537197 dracut[1284]: *** Including module: terminfo ***
Nov 26 17:15:34 np0005537197 dracut[1284]: *** Including module: udev-rules ***
Nov 26 17:15:35 np0005537197 dracut[1284]: Skipping udev rule: 91-permissions.rules
Nov 26 17:15:35 np0005537197 dracut[1284]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 26 17:15:35 np0005537197 dracut[1284]: *** Including module: virtiofs ***
Nov 26 17:15:35 np0005537197 dracut[1284]: *** Including module: dracut-systemd ***
Nov 26 17:15:35 np0005537197 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 26 17:15:35 np0005537197 dracut[1284]: *** Including module: usrmount ***
Nov 26 17:15:35 np0005537197 dracut[1284]: *** Including module: base ***
Nov 26 17:15:36 np0005537197 dracut[1284]: *** Including module: fs-lib ***
Nov 26 17:15:36 np0005537197 dracut[1284]: *** Including module: kdumpbase ***
Nov 26 17:15:36 np0005537197 dracut[1284]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 26 17:15:36 np0005537197 dracut[1284]:  microcode_ctl module: mangling fw_dir
Nov 26 17:15:36 np0005537197 dracut[1284]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 26 17:15:36 np0005537197 dracut[1284]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 26 17:15:36 np0005537197 dracut[1284]:    microcode_ctl: configuration "intel" is ignored
Nov 26 17:15:36 np0005537197 dracut[1284]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 26 17:15:37 np0005537197 dracut[1284]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 26 17:15:37 np0005537197 dracut[1284]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 26 17:15:37 np0005537197 dracut[1284]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 26 17:15:37 np0005537197 dracut[1284]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 26 17:15:37 np0005537197 dracut[1284]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 26 17:15:37 np0005537197 dracut[1284]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 26 17:15:37 np0005537197 dracut[1284]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 26 17:15:37 np0005537197 dracut[1284]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 26 17:15:37 np0005537197 dracut[1284]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 26 17:15:37 np0005537197 dracut[1284]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 26 17:15:37 np0005537197 dracut[1284]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 26 17:15:37 np0005537197 dracut[1284]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 26 17:15:37 np0005537197 dracut[1284]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 26 17:15:37 np0005537197 dracut[1284]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 26 17:15:37 np0005537197 dracut[1284]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 26 17:15:37 np0005537197 dracut[1284]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 26 17:15:37 np0005537197 dracut[1284]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 26 17:15:37 np0005537197 dracut[1284]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Nov 26 17:15:37 np0005537197 dracut[1284]: *** Including module: openssl ***
Nov 26 17:15:37 np0005537197 dracut[1284]: *** Including module: shutdown ***
Nov 26 17:15:37 np0005537197 dracut[1284]: *** Including module: squash ***
Nov 26 17:15:37 np0005537197 dracut[1284]: *** Including modules done ***
Nov 26 17:15:37 np0005537197 dracut[1284]: *** Installing kernel module dependencies ***
Nov 26 17:15:39 np0005537197 dracut[1284]: *** Installing kernel module dependencies done ***
Nov 26 17:15:39 np0005537197 dracut[1284]: *** Resolving executable dependencies ***
Nov 26 17:15:42 np0005537197 dracut[1284]: *** Resolving executable dependencies done ***
Nov 26 17:15:42 np0005537197 dracut[1284]: *** Generating early-microcode cpio image ***
Nov 26 17:15:42 np0005537197 dracut[1284]: *** Store current command line parameters ***
Nov 26 17:15:42 np0005537197 dracut[1284]: Stored kernel commandline:
Nov 26 17:15:42 np0005537197 dracut[1284]: No dracut internal kernel commandline stored in the initramfs
Nov 26 17:15:45 np0005537197 dracut[1284]: *** Install squash loader ***
Nov 26 17:15:46 np0005537197 dracut[1284]: *** Squashing the files inside the initramfs ***
Nov 26 17:15:47 np0005537197 dracut[1284]: *** Squashing the files inside the initramfs done ***
Nov 26 17:15:47 np0005537197 dracut[1284]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 26 17:15:47 np0005537197 dracut[1284]: *** Hardlinking files ***
Nov 26 17:15:47 np0005537197 dracut[1284]: *** Hardlinking files done ***
Nov 26 17:15:49 np0005537197 dracut[1284]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Nov 26 17:15:51 np0005537197 kdumpctl[1018]: kdump: kexec: loaded kdump kernel
Nov 26 17:15:51 np0005537197 kdumpctl[1018]: kdump: Starting kdump: [OK]
Nov 26 17:15:51 np0005537197 systemd[1]: Finished Crash recovery kernel arming.
Nov 26 17:15:51 np0005537197 systemd[1]: Startup finished in 1.717s (kernel) + 3.178s (initrd) + 29.010s (userspace) = 33.906s.
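The dracut run between 17:15:34 and 17:15:49 is kdumpctl building a dedicated crash-kernel initramfs and then arming it via kexec ("loaded kdump kernel" above). A sketch of the equivalent manual rebuild, using the image path and the module names ("kdumpbase", "squash") that this log shows being included, with standard dracut flags:

    import os
    import subprocess

    # Rebuild the kdump initramfs that kdumpctl generated above.
    kver = os.uname().release  # e.g. '5.14.0-642.el9.x86_64'
    subprocess.run(
        ['dracut', '--force', '--add', 'kdumpbase', '--add', 'squash',
         f'/boot/initramfs-{kver}kdump.img', kver],
        check=True)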
Nov 26 17:15:55 np0005537197 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 26 17:17:22 np0005537197 systemd[1]: Created slice User Slice of UID 1000.
Nov 26 17:17:22 np0005537197 systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 26 17:17:22 np0005537197 systemd-logind[819]: New session 1 of user zuul.
Nov 26 17:17:22 np0005537197 systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 26 17:17:22 np0005537197 systemd[1]: Starting User Manager for UID 1000...
Nov 26 17:17:22 np0005537197 systemd[4300]: Queued start job for default target Main User Target.
Nov 26 17:17:22 np0005537197 systemd[4300]: Created slice User Application Slice.
Nov 26 17:17:22 np0005537197 systemd[4300]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 26 17:17:22 np0005537197 systemd[4300]: Started Daily Cleanup of User's Temporary Directories.
Nov 26 17:17:22 np0005537197 systemd[4300]: Reached target Paths.
Nov 26 17:17:22 np0005537197 systemd[4300]: Reached target Timers.
Nov 26 17:17:22 np0005537197 systemd[4300]: Starting D-Bus User Message Bus Socket...
Nov 26 17:17:22 np0005537197 systemd[4300]: Starting Create User's Volatile Files and Directories...
Nov 26 17:17:22 np0005537197 systemd[4300]: Listening on D-Bus User Message Bus Socket.
Nov 26 17:17:22 np0005537197 systemd[4300]: Reached target Sockets.
Nov 26 17:17:22 np0005537197 systemd[4300]: Finished Create User's Volatile Files and Directories.
Nov 26 17:17:22 np0005537197 systemd[4300]: Reached target Basic System.
Nov 26 17:17:22 np0005537197 systemd[4300]: Reached target Main User Target.
Nov 26 17:17:22 np0005537197 systemd[4300]: Startup finished in 169ms.
Nov 26 17:17:22 np0005537197 systemd[1]: Started User Manager for UID 1000.
Nov 26 17:17:22 np0005537197 systemd[1]: Started Session 1 of User zuul.
Nov 26 17:17:23 np0005537197 python3[4383]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:17:26 np0005537197 python3[4411]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:17:32 np0005537197 python3[4469]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:17:33 np0005537197 python3[4509]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
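ansible-zuul_console above starts Zuul's console-log streaming service, which writes task output to the recorded /tmp/console-*.log path and serves it on the port shown in the log line. A quick reachability probe (sketch; host and port are taken from that line):

    import socket

    # Probe the console streamer port recorded above.
    with socket.create_connection(('127.0.0.1', 19885), timeout=5):
        print('zuul_console is listening')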
Nov 26 17:17:35 np0005537197 python3[4535]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAprG+HeMPP/aT+AxDbHfD8+gLH9m1B23+gayEqTQzylVLAM3fI65wEhXWzwxetIUt30ClHgpnIW+S+17WQNgxYFmlyMnskUeDFqF3amMqFqwn21P6Cmuf2OQD9xLVbucJs28MciWoqc8GvoTb4YtsBQIWgr6wKeWLVBksG3isPqo1/SkQUcclUIbyD41//TssGzsl9XnfvRntkS9+X3IADNADOwaDBh9HDmxgoL18QpDXOGLPrihtw9pauwcg/luJFiVXFWrEq7ODUH4SrcoAc4RWIKh0OPIzb2+hZ4Irtsv44BnpkIteqKRbh13OZwW+Y4gmzfxOFg3xiOquIx/XzOeOumY1fcVDu+FTzXagwOBL24VNS5vT8nZjY84X9eRM0GrNQHIxhw1/E6Y+JH7J12thwIGK8PHdRqtRlRXS1OGpaP+pGS/iR37noqG8BV24j8rxcfN/y8ssfj5c3ft611jp8iFLcPFNGjpQZIBcAqbHZWDztuRr+67zWKtvssE= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 17:17:35 np0005537197 python3[4559]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:17:36 np0005537197 python3[4658]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 17:17:36 np0005537197 python3[4729]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764195455.7952125-207-169613794980832/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=921d217a91eb4e5499eb4629c470ef78_id_rsa follow=False checksum=d968db03a9e68ae99b92dd6aeb8256916af2fa6e backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:17:37 np0005537197 python3[4852]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 17:17:37 np0005537197 python3[4923]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764195456.8387623-240-172156635881247/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=921d217a91eb4e5499eb4629c470ef78_id_rsa.pub follow=False checksum=8cdafc05275aa02f706edfe61438c48fa15c133c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
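A detail worth knowing when reading these entries: Ansible logs file modes in decimal, so mode=448 above is 0o700 on /home/zuul/.ssh, mode=384 is 0o600 on the private key, and mode=420 is 0o644 on the public key (the same convention applies to the 493, 511 and 288 values later in this log). Quick conversion:

    # Modes as syslog records them (decimal), back to octal notation.
    for mode in (448, 384, 420, 493, 511, 288):
        print(f'{mode} -> {oct(mode)}')
    # 448 -> 0o700, 384 -> 0o600, 420 -> 0o644,
    # 493 -> 0o755, 511 -> 0o777, 288 -> 0o440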
Nov 26 17:17:39 np0005537197 python3[4971]: ansible-ping Invoked with data=pong
Nov 26 17:17:39 np0005537197 python3[4995]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:17:41 np0005537197 python3[5053]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 26 17:17:42 np0005537197 python3[5085]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:17:42 np0005537197 python3[5109]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:17:43 np0005537197 python3[5133]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:17:43 np0005537197 python3[5157]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:17:43 np0005537197 python3[5181]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:17:43 np0005537197 python3[5205]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:17:45 np0005537197 python3[5231]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:17:46 np0005537197 python3[5309]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 17:17:46 np0005537197 python3[5382]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764195465.6498032-21-278448160615404/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:17:47 np0005537197 python3[5430]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 17:17:47 np0005537197 python3[5454]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 17:17:47 np0005537197 python3[5478]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 17:17:48 np0005537197 python3[5502]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 17:17:48 np0005537197 python3[5526]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 17:17:48 np0005537197 python3[5550]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 17:17:49 np0005537197 python3[5574]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 17:17:49 np0005537197 python3[5598]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 17:17:49 np0005537197 python3[5622]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 17:17:49 np0005537197 python3[5646]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 17:17:50 np0005537197 python3[5670]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 17:17:50 np0005537197 python3[5694]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 17:17:50 np0005537197 python3[5718]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 17:17:51 np0005537197 python3[5742]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 17:17:51 np0005537197 python3[5766]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 17:17:51 np0005537197 python3[5790]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 17:17:51 np0005537197 python3[5814]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 17:17:52 np0005537197 python3[5838]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 17:17:52 np0005537197 python3[5862]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 17:17:52 np0005537197 python3[5886]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 17:17:53 np0005537197 python3[5910]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 17:17:53 np0005537197 python3[5934]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 17:17:53 np0005537197 python3[5958]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 17:17:53 np0005537197 python3[5982]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 17:17:54 np0005537197 python3[6006]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 17:17:54 np0005537197 python3[6030]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
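The long run of ansible-authorized_key calls above installs each developer's public key for the zuul user, one idempotent append per key. A simplified Python sketch of that behavior (the real module additionally handles key options, exclusivity, validation and SELinux contexts):

    from pathlib import Path

    def authorize(key: str, home: str = '/home/zuul') -> bool:
        """Append a public key to authorized_keys unless already present."""
        auth = Path(home, '.ssh', 'authorized_keys')
        auth.parent.mkdir(mode=0o700, exist_ok=True)
        existing = auth.read_text().splitlines() if auth.exists() else []
        if key in existing:
            return False            # already present: no change reported
        with auth.open('a') as f:
            f.write(key + '\n')
        auth.chmod(0o600)
        return True                 # changed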
Nov 26 17:17:56 np0005537197 python3[6056]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 26 17:17:56 np0005537197 systemd[1]: Starting Time & Date Service...
Nov 26 17:17:56 np0005537197 systemd[1]: Started Time & Date Service.
Nov 26 17:17:57 np0005537197 systemd-timedated[6058]: Changed time zone to 'UTC' (UTC).
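The community.general.timezone call resolves to a request to systemd-timedated, which starts on demand (17:17:56) and applies the change. The command-line equivalent is a single timedatectl call (sketch; needs root):

    import subprocess

    # Same change systemd-timedated performs above, driven via timedatectl.
    subprocess.run(['timedatectl', 'set-timezone', 'UTC'], check=True)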
Nov 26 17:17:58 np0005537197 python3[6087]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:17:58 np0005537197 python3[6163]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 17:17:59 np0005537197 python3[6234]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764195478.6811395-153-148721305969942/source _original_basename=tmpfk4rxqnh follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:17:59 np0005537197 python3[6334]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 17:18:00 np0005537197 python3[6405]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764195479.6139014-183-113327052953355/source _original_basename=tmpfj18pk8t follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:18:01 np0005537197 python3[6507]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 17:18:01 np0005537197 python3[6580]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764195480.7721636-231-141995168314851/source _original_basename=tmprxgdh0o4 follow=False checksum=bd04c4e2bbffb15439bf671f57e577cfe66c7fe6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:18:02 np0005537197 python3[6628]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:18:02 np0005537197 python3[6654]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:18:02 np0005537197 python3[6734]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 17:18:03 np0005537197 python3[6807]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764195482.5075898-273-74903392713880/source _original_basename=tmpet8ibg94 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:18:03 np0005537197 python3[6858]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-8766-0830-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
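The copy above drops a sudoers fragment with mode 288 (0o440), and the job then runs /usr/sbin/visudo -c, which parses the whole sudoers configuration and exits non-zero on a syntax error. A sketch of checking a single candidate file before installing it, using visudo's -f flag:

    import subprocess

    def sudoers_ok(path: str) -> bool:
        """Parse one candidate sudoers file; True if its syntax is valid."""
        return subprocess.run(
            ['/usr/sbin/visudo', '-c', '-f', path]).returncode == 0

    # Example: sudoers_ok('/etc/sudoers.d/zuul-sudo-grep')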
Nov 26 17:18:04 np0005537197 python3[6886]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-8766-0830-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 26 17:18:05 np0005537197 python3[6914]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:18:23 np0005537197 python3[6940]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:18:27 np0005537197 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 26 17:18:58 np0005537197 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 26 17:18:58 np0005537197 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 26 17:18:58 np0005537197 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 26 17:18:58 np0005537197 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 26 17:18:58 np0005537197 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 26 17:18:58 np0005537197 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 26 17:18:58 np0005537197 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 26 17:18:58 np0005537197 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 26 17:18:58 np0005537197 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 26 17:18:58 np0005537197 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 26 17:18:58 np0005537197 NetworkManager[859]: <info>  [1764195538.7159] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 26 17:18:58 np0005537197 systemd-udevd[6944]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 17:18:58 np0005537197 NetworkManager[859]: <info>  [1764195538.7417] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 17:18:58 np0005537197 NetworkManager[859]: <info>  [1764195538.7456] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 26 17:18:58 np0005537197 NetworkManager[859]: <info>  [1764195538.7461] device (eth1): carrier: link connected
Nov 26 17:18:58 np0005537197 NetworkManager[859]: <info>  [1764195538.7463] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 26 17:18:58 np0005537197 NetworkManager[859]: <info>  [1764195538.7469] policy: auto-activating connection 'Wired connection 1' (5c3b9889-44f2-341d-a218-5a4a108bc318)
Nov 26 17:18:58 np0005537197 NetworkManager[859]: <info>  [1764195538.7474] device (eth1): Activation: starting connection 'Wired connection 1' (5c3b9889-44f2-341d-a218-5a4a108bc318)
Nov 26 17:18:58 np0005537197 NetworkManager[859]: <info>  [1764195538.7475] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 17:18:58 np0005537197 NetworkManager[859]: <info>  [1764195538.7478] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 17:18:58 np0005537197 NetworkManager[859]: <info>  [1764195538.7483] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 17:18:58 np0005537197 NetworkManager[859]: <info>  [1764195538.7487] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
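This burst records a second virtio NIC being hot-plugged: the kernel enumerates PCI device 0000:00:07.0 (1af4:1000 is the virtio network device), assigns its BARs and enables it, and NetworkManager immediately brings the new eth1 up with an auto-generated DHCP profile ('Wired connection 1') on a 45-second transaction timer. Confirming the device identity from sysfs (sketch):

    from pathlib import Path

    # 0x1af4 is the Red Hat/virtio vendor ID; device 0x1000 is virtio-net.
    dev = Path('/sys/bus/pci/devices/0000:00:07.0')
    vendor = dev.joinpath('vendor').read_text().strip()
    device = dev.joinpath('device').read_text().strip()
    print(vendor, device)   # expect 0x1af4 0x1000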
Nov 26 17:18:59 np0005537197 python3[6970]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-837f-e1a0-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:19:06 np0005537197 python3[7050]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 17:19:07 np0005537197 python3[7123]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764195546.5432813-102-116030514540924/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=013f61a3506005b0f1a01f7e96af1d0b4d436d95 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:19:08 np0005537197 python3[7173]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 17:19:08 np0005537197 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 26 17:19:08 np0005537197 systemd[1]: Stopped Network Manager Wait Online.
Nov 26 17:19:08 np0005537197 systemd[1]: Stopping Network Manager Wait Online...
Nov 26 17:19:08 np0005537197 systemd[1]: Stopping Network Manager...
Nov 26 17:19:08 np0005537197 NetworkManager[859]: <info>  [1764195548.3016] caught SIGTERM, shutting down normally.
Nov 26 17:19:08 np0005537197 NetworkManager[859]: <info>  [1764195548.3028] dhcp4 (eth0): canceled DHCP transaction
Nov 26 17:19:08 np0005537197 NetworkManager[859]: <info>  [1764195548.3030] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 17:19:08 np0005537197 NetworkManager[859]: <info>  [1764195548.3031] dhcp4 (eth0): state changed no lease
Nov 26 17:19:08 np0005537197 NetworkManager[859]: <info>  [1764195548.3036] manager: NetworkManager state is now CONNECTING
Nov 26 17:19:08 np0005537197 NetworkManager[859]: <info>  [1764195548.3141] dhcp4 (eth1): canceled DHCP transaction
Nov 26 17:19:08 np0005537197 NetworkManager[859]: <info>  [1764195548.3143] dhcp4 (eth1): state changed no lease
Nov 26 17:19:08 np0005537197 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 26 17:19:08 np0005537197 NetworkManager[859]: <info>  [1764195548.3210] exiting (success)
Nov 26 17:19:08 np0005537197 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 26 17:19:08 np0005537197 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 26 17:19:08 np0005537197 systemd[1]: Stopped Network Manager.
Nov 26 17:19:08 np0005537197 systemd[1]: NetworkManager.service: Consumed 1.458s CPU time, 10.0M memory peak.
Nov 26 17:19:08 np0005537197 systemd[1]: Starting Network Manager...
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.3874] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:3c62c5e9-a0a5-407a-900d-a0335b249ae4)
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.3878] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.3947] manager[0x562a9feb9070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 26 17:19:08 np0005537197 systemd[1]: Starting Hostname Service...
Nov 26 17:19:08 np0005537197 systemd[1]: Started Hostname Service.
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5144] hostname: hostname: using hostnamed
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5145] hostname: static hostname changed from (none) to "np0005537197.novalocal"
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5155] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5164] manager[0x562a9feb9070]: rfkill: Wi-Fi hardware radio set enabled
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5164] manager[0x562a9feb9070]: rfkill: WWAN hardware radio set enabled
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5217] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5218] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5219] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5220] manager: Networking is enabled by state file
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5224] settings: Loaded settings plugin: keyfile (internal)
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5234] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5267] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5284] dhcp: init: Using DHCP client 'internal'
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5289] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5298] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5308] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5323] device (lo): Activation: starting connection 'lo' (ea7e87d8-c88d-44a1-b899-586bba8705c2)
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5335] device (eth0): carrier: link connected
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5342] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5351] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5352] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5364] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5376] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5388] device (eth1): carrier: link connected
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5395] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5405] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (5c3b9889-44f2-341d-a218-5a4a108bc318) (indicated)
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5406] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5416] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5427] device (eth1): Activation: starting connection 'Wired connection 1' (5c3b9889-44f2-341d-a218-5a4a108bc318)
Nov 26 17:19:08 np0005537197 systemd[1]: Started Network Manager.
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5438] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5446] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5450] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5454] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5458] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5464] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5468] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5474] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5479] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5492] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5498] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5515] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5520] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5553] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5562] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5575] device (lo): Activation: successful, device activated.
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5591] dhcp4 (eth0): state changed new lease, address=38.102.83.156
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5605] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 26 17:19:08 np0005537197 systemd[1]: Starting Network Manager Wait Online...
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5776] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5797] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5799] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5803] manager: NetworkManager state is now CONNECTED_SITE
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5807] device (eth0): Activation: successful, device activated.
Nov 26 17:19:08 np0005537197 NetworkManager[7182]: <info>  [1764195548.5814] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 26 17:19:08 np0005537197 python3[7258]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-837f-e1a0-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:19:18 np0005537197 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 26 17:19:38 np0005537197 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 26 17:19:53 np0005537197 systemd[4300]: Starting Mark boot as successful...
Nov 26 17:19:53 np0005537197 systemd[4300]: Finished Mark boot as successful.
Nov 26 17:19:54 np0005537197 NetworkManager[7182]: <info>  [1764195594.2841] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 26 17:19:54 np0005537197 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 26 17:19:54 np0005537197 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 26 17:19:54 np0005537197 NetworkManager[7182]: <info>  [1764195594.3213] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 26 17:19:54 np0005537197 NetworkManager[7182]: <info>  [1764195594.3223] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 26 17:19:54 np0005537197 NetworkManager[7182]: <info>  [1764195594.3249] device (eth1): Activation: successful, device activated.
Nov 26 17:19:54 np0005537197 NetworkManager[7182]: <info>  [1764195594.3263] manager: startup complete
Nov 26 17:19:54 np0005537197 NetworkManager[7182]: <info>  [1764195594.3267] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 26 17:19:54 np0005537197 NetworkManager[7182]: <warn>  [1764195594.3286] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 26 17:19:54 np0005537197 NetworkManager[7182]: <info>  [1764195594.3296] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 26 17:19:54 np0005537197 systemd[1]: Finished Network Manager Wait Online.
Nov 26 17:19:54 np0005537197 NetworkManager[7182]: <info>  [1764195594.3411] dhcp4 (eth1): canceled DHCP transaction
Nov 26 17:19:54 np0005537197 NetworkManager[7182]: <info>  [1764195594.3412] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 26 17:19:54 np0005537197 NetworkManager[7182]: <info>  [1764195594.3412] dhcp4 (eth1): state changed no lease
Nov 26 17:19:54 np0005537197 NetworkManager[7182]: <info>  [1764195594.3446] policy: auto-activating connection 'ci-private-network' (b01c471b-9386-5d78-b9ae-af7b0380978b)
Nov 26 17:19:54 np0005537197 NetworkManager[7182]: <info>  [1764195594.3455] device (eth1): Activation: starting connection 'ci-private-network' (b01c471b-9386-5d78-b9ae-af7b0380978b)
Nov 26 17:19:54 np0005537197 NetworkManager[7182]: <info>  [1764195594.3457] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 17:19:54 np0005537197 NetworkManager[7182]: <info>  [1764195594.3462] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 17:19:54 np0005537197 NetworkManager[7182]: <info>  [1764195594.3477] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 17:19:54 np0005537197 NetworkManager[7182]: <info>  [1764195594.3493] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 17:19:54 np0005537197 NetworkManager[7182]: <info>  [1764195594.3544] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 17:19:54 np0005537197 NetworkManager[7182]: <info>  [1764195594.3547] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 17:19:54 np0005537197 NetworkManager[7182]: <info>  [1764195594.3558] device (eth1): Activation: successful, device activated.
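The eth1 story resolves here: the DHCP transaction for the auto-generated 'Wired connection 1', begun at 17:19:08, hits its 45-second timeout at 17:19:54 and the activation fails with 'ip-config-unavailable'; NetworkManager then auto-activates the 'ci-private-network' profile installed earlier at /etc/NetworkManager/system-connections/, which activates within milliseconds, consistent with static rather than DHCP addressing. A sketch of the kind of keyfile that produces this behavior; the interface and addressing below are illustrative assumptions, since the real file's contents are not in this log:

    import os

    # Illustrative only: the actual ci-private-network.nmconnection
    # contents are not logged. Static IPv4 (method=manual) activates
    # without waiting on a DHCP lease.
    KEYFILE = """\
    [connection]
    id=ci-private-network
    type=ethernet
    interface-name=eth1

    [ipv4]
    method=manual
    address1=192.168.0.10/24
    """
    path = ('/etc/NetworkManager/system-connections/'
            'ci-private-network.nmconnection')
    with open(path, 'w') as f:
        f.write(KEYFILE)
    os.chmod(path, 0o600)   # NM refuses keyfiles that are world-readable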
Nov 26 17:20:04 np0005537197 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 26 17:20:09 np0005537197 systemd-logind[819]: Session 1 logged out. Waiting for processes to exit.
Nov 26 17:20:10 np0005537197 systemd-logind[819]: New session 3 of user zuul.
Nov 26 17:20:10 np0005537197 systemd[1]: Started Session 3 of User zuul.
Nov 26 17:20:10 np0005537197 python3[7368]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 17:20:11 np0005537197 python3[7441]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764195610.2310944-259-174738712245988/source _original_basename=tmpajyql54h follow=False checksum=0adda94870a6549b6308aae872a1c03be71ed385 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:20:13 np0005537197 systemd[1]: session-3.scope: Deactivated successfully.
Nov 26 17:20:13 np0005537197 systemd[1]: session-3.scope: Consumed 1.026s CPU time.
Nov 26 17:20:13 np0005537197 systemd-logind[819]: Session 3 logged out. Waiting for processes to exit.
Nov 26 17:20:13 np0005537197 systemd-logind[819]: Removed session 3.
Nov 26 17:22:53 np0005537197 systemd[4300]: Created slice User Background Tasks Slice.
Nov 26 17:22:53 np0005537197 systemd[4300]: Starting Cleanup of User's Temporary Files and Directories...
Nov 26 17:22:53 np0005537197 systemd[4300]: Finished Cleanup of User's Temporary Files and Directories.
Nov 26 17:25:41 np0005537197 systemd-logind[819]: New session 4 of user zuul.
Nov 26 17:25:41 np0005537197 systemd[1]: Started Session 4 of User zuul.
Nov 26 17:25:41 np0005537197 python3[7502]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-dc59-a4be-000000001cde-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:25:41 np0005537197 python3[7530]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:25:42 np0005537197 python3[7557]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:25:42 np0005537197 python3[7583]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:25:42 np0005537197 python3[7609]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:25:43 np0005537197 python3[7635]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:25:43 np0005537197 python3[7713]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 17:25:43 np0005537197 python3[7786]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764195943.2464426-485-38144940068848/source _original_basename=tmpdq3vfxb_ follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:25:44 np0005537197 python3[7836]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 17:25:44 np0005537197 systemd[1]: Reloading.
Nov 26 17:25:44 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 17:25:46 np0005537197 python3[7891]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 26 17:25:46 np0005537197 python3[7917]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:25:47 np0005537197 python3[7945]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:25:47 np0005537197 python3[7973]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:25:47 np0005537197 python3[8001]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:25:48 np0005537197 python3[8028]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-dc59-a4be-000000001ce5-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:25:48 np0005537197 python3[8058]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
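Note: the four writes above use the cgroup v2 io.max interface. Each line is "MAJOR:MINOR key=value ...": 252:0 is the block device being throttled on this guest, riops/wiops cap read/write IOPS at 18000, and rbps/wbps cap throughput at 262144000 bytes/s (exactly 250 MiB/s). The same limit can be applied and checked by hand:

    echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" \
      > /sys/fs/cgroup/system.slice/io.max
    cat /sys/fs/cgroup/system.slice/io.max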
Nov 26 17:25:50 np0005537197 systemd[1]: session-4.scope: Deactivated successfully.
Nov 26 17:25:50 np0005537197 systemd[1]: session-4.scope: Consumed 5.001s CPU time.
Nov 26 17:25:50 np0005537197 systemd-logind[819]: Session 4 logged out. Waiting for processes to exit.
Nov 26 17:25:50 np0005537197 systemd-logind[819]: Removed session 4.
Nov 26 17:25:52 np0005537197 systemd-logind[819]: New session 5 of user zuul.
Nov 26 17:25:52 np0005537197 systemd[1]: Started Session 5 of User zuul.
Nov 26 17:25:52 np0005537197 python3[8092]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 26 17:26:09 np0005537197 kernel: SELinux:  Converting 385 SID table entries...
Nov 26 17:26:09 np0005537197 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 17:26:09 np0005537197 kernel: SELinux:  policy capability open_perms=1
Nov 26 17:26:09 np0005537197 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 17:26:09 np0005537197 kernel: SELinux:  policy capability always_check_network=0
Nov 26 17:26:09 np0005537197 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 17:26:09 np0005537197 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 17:26:09 np0005537197 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 17:26:18 np0005537197 kernel: SELinux:  Converting 385 SID table entries...
Nov 26 17:26:18 np0005537197 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 17:26:18 np0005537197 kernel: SELinux:  policy capability open_perms=1
Nov 26 17:26:18 np0005537197 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 17:26:18 np0005537197 kernel: SELinux:  policy capability always_check_network=0
Nov 26 17:26:18 np0005537197 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 17:26:18 np0005537197 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 17:26:18 np0005537197 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 17:26:28 np0005537197 kernel: SELinux:  Converting 385 SID table entries...
Nov 26 17:26:28 np0005537197 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 17:26:28 np0005537197 kernel: SELinux:  policy capability open_perms=1
Nov 26 17:26:28 np0005537197 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 17:26:28 np0005537197 kernel: SELinux:  policy capability always_check_network=0
Nov 26 17:26:28 np0005537197 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 17:26:28 np0005537197 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 17:26:28 np0005537197 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 17:26:30 np0005537197 setsebool[8160]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 26 17:26:30 np0005537197 setsebool[8160]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
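Note: these two booleans are flipped by root during the package transaction above, most likely by a policy package scriptlet. The equivalent manual command, with -P so the change persists across reboots and policy reloads:

    setsebool -P virt_use_nfs=on virt_sandbox_use_all_caps=on
    getsebool virt_use_nfs virt_sandbox_use_all_caps   # verify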
Nov 26 17:26:42 np0005537197 kernel: SELinux:  Converting 388 SID table entries...
Nov 26 17:26:42 np0005537197 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 17:26:42 np0005537197 kernel: SELinux:  policy capability open_perms=1
Nov 26 17:26:42 np0005537197 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 17:26:42 np0005537197 kernel: SELinux:  policy capability always_check_network=0
Nov 26 17:26:42 np0005537197 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 17:26:42 np0005537197 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 17:26:42 np0005537197 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 17:27:01 np0005537197 dbus-broker-launch[792]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 26 17:27:01 np0005537197 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 17:27:01 np0005537197 systemd[1]: Starting man-db-cache-update.service...
Nov 26 17:27:01 np0005537197 systemd[1]: Reloading.
Nov 26 17:27:01 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 17:27:01 np0005537197 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 17:27:05 np0005537197 python3[11688]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-4420-5dd9-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:27:06 np0005537197 kernel: evm: overlay not supported
Nov 26 17:27:06 np0005537197 systemd[4300]: Starting D-Bus User Message Bus...
Nov 26 17:27:06 np0005537197 dbus-broker-launch[12398]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 26 17:27:06 np0005537197 systemd[4300]: Started D-Bus User Message Bus.
Nov 26 17:27:06 np0005537197 dbus-broker-launch[12398]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 26 17:27:06 np0005537197 dbus-broker-launch[12398]: Ready
Nov 26 17:27:06 np0005537197 systemd[4300]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 26 17:27:06 np0005537197 systemd[4300]: Created slice Slice /user.
Nov 26 17:27:06 np0005537197 systemd[4300]: podman-12270.scope: unit configures an IP firewall, but not running as root.
Nov 26 17:27:06 np0005537197 systemd[4300]: (This warning is only shown for the first unit using IP firewalling.)
Nov 26 17:27:06 np0005537197 systemd[4300]: Started podman-12270.scope.
Nov 26 17:27:07 np0005537197 systemd[4300]: Started podman-pause-8a408fdb.scope.
Nov 26 17:27:07 np0005537197 python3[12915]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.66:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.66:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:27:07 np0005537197 python3[12915]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
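Note: decoded (#012 is journald's escape for a newline), the blockinfile task above leaves this managed block at the end of /etc/containers/registries.conf, telling podman to pull from the CI registry over plain HTTP:

    # BEGIN ANSIBLE MANAGED BLOCK
    [[registry]]
    location = "38.102.83.66:5001"
    insecure = true
    # END ANSIBLE MANAGED BLOCK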
Nov 26 17:27:08 np0005537197 systemd[1]: session-5.scope: Deactivated successfully.
Nov 26 17:27:08 np0005537197 systemd[1]: session-5.scope: Consumed 1min 895ms CPU time.
Nov 26 17:27:08 np0005537197 systemd-logind[819]: Session 5 logged out. Waiting for processes to exit.
Nov 26 17:27:08 np0005537197 systemd-logind[819]: Removed session 5.
Nov 26 17:27:32 np0005537197 systemd-logind[819]: New session 6 of user zuul.
Nov 26 17:27:32 np0005537197 systemd[1]: Started Session 6 of User zuul.
Nov 26 17:27:32 np0005537197 python3[21652]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNclbi/pNNpFYNuKx1RJRUauAL8pSjk3tRI8DcAl1w1/ubzQYoczHINyiVVLQ82/Gaysu0vI8f36d0l91NZI2U0= zuul@np0005537196.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 17:27:33 np0005537197 python3[21808]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNclbi/pNNpFYNuKx1RJRUauAL8pSjk3tRI8DcAl1w1/ubzQYoczHINyiVVLQ82/Gaysu0vI8f36d0l91NZI2U0= zuul@np0005537196.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 17:27:34 np0005537197 python3[22064]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005537197.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 26 17:27:34 np0005537197 python3[22293]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNclbi/pNNpFYNuKx1RJRUauAL8pSjk3tRI8DcAl1w1/ubzQYoczHINyiVVLQ82/Gaysu0vI8f36d0l91NZI2U0= zuul@np0005537196.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 26 17:27:34 np0005537197 python3[22569]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 17:27:35 np0005537197 python3[22827]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764196054.649948-135-246895349473051/source _original_basename=tmp30pmdqt4 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
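Note: the cloud-admin bootstrap above (user creation at 17:27:34, authorized key, sudoers drop-in) in equivalent shell. The sudoers file content is masked in the log, so the NOPASSWD rule below is an assumption; the key (elided) is the zuul@np0005537196 controller key also installed for zuul and root:

    useradd -m -s /bin/bash cloud-admin
    install -d -m 0700 -o cloud-admin -g cloud-admin ~cloud-admin/.ssh
    echo 'ecdsa-sha2-nistp256 AAAAE2Vj... zuul@np0005537196.novalocal' \
      >> ~cloud-admin/.ssh/authorized_keys
    chown cloud-admin:cloud-admin ~cloud-admin/.ssh/authorized_keys
    chmod 0600 ~cloud-admin/.ssh/authorized_keys
    # sudoers content was not logged; NOPASSWD rule is an assumption
    echo 'cloud-admin ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/cloud-admin
    chmod 0640 /etc/sudoers.d/cloud-admin
    visudo -cf /etc/sudoers.d/cloud-admin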
Nov 26 17:27:36 np0005537197 python3[23125]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 26 17:27:36 np0005537197 systemd[1]: Starting Hostname Service...
Nov 26 17:27:36 np0005537197 systemd[1]: Started Hostname Service.
Nov 26 17:27:36 np0005537197 systemd-hostnamed[23232]: Changed pretty hostname to 'compute-0'
Nov 26 17:27:36 np0005537197 systemd-hostnamed[23232]: Hostname set to <compute-0> (static)
Nov 26 17:27:36 np0005537197 NetworkManager[7182]: <info>  [1764196056.3945] hostname: static hostname changed from "np0005537197.novalocal" to "compute-0"
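Note: ansible.builtin.hostname with use=systemd drives systemd-hostnamed over D-Bus; as the log shows, both the static and pretty names change. The command-line equivalent:

    hostnamectl set-hostname compute-0   # by default sets static, transient and pretty names
    hostnamectl status                   # confirm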
Nov 26 17:27:36 np0005537197 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 26 17:27:36 np0005537197 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 26 17:27:36 np0005537197 systemd[1]: session-6.scope: Deactivated successfully.
Nov 26 17:27:36 np0005537197 systemd[1]: session-6.scope: Consumed 2.447s CPU time.
Nov 26 17:27:36 np0005537197 systemd-logind[819]: Session 6 logged out. Waiting for processes to exit.
Nov 26 17:27:36 np0005537197 systemd-logind[819]: Removed session 6.
Nov 26 17:27:46 np0005537197 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 26 17:27:56 np0005537197 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 17:27:56 np0005537197 systemd[1]: Finished man-db-cache-update.service.
Nov 26 17:27:56 np0005537197 systemd[1]: man-db-cache-update.service: Consumed 1min 1.420s CPU time.
Nov 26 17:27:56 np0005537197 systemd[1]: run-r703306c52c17458b94ab1ef05322d61b.service: Deactivated successfully.
Nov 26 17:28:06 np0005537197 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 26 17:30:53 np0005537197 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 26 17:30:53 np0005537197 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 26 17:30:53 np0005537197 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 26 17:30:53 np0005537197 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 26 17:32:10 np0005537197 systemd-logind[819]: New session 7 of user zuul.
Nov 26 17:32:10 np0005537197 systemd[1]: Started Session 7 of User zuul.
Nov 26 17:32:11 np0005537197 python3[30009]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:32:12 np0005537197 python3[30125]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 17:32:13 np0005537197 python3[30198]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764196332.4680402-33559-233348451210513/source mode=0755 _original_basename=delorean.repo follow=False checksum=a16f090252000d02a7f7d540bb10f7c1c9cd4ac5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:32:13 np0005537197 python3[30224]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 17:32:14 np0005537197 python3[30297]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764196332.4680402-33559-233348451210513/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:32:14 np0005537197 python3[30323]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 17:32:14 np0005537197 python3[30396]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764196332.4680402-33559-233348451210513/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:32:15 np0005537197 python3[30422]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 17:32:15 np0005537197 python3[30495]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764196332.4680402-33559-233348451210513/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:32:15 np0005537197 python3[30521]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 17:32:16 np0005537197 python3[30594]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764196332.4680402-33559-233348451210513/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:32:16 np0005537197 python3[30620]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 17:32:16 np0005537197 python3[30693]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764196332.4680402-33559-233348451210513/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:32:16 np0005537197 python3[30719]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 26 17:32:17 np0005537197 python3[30792]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764196332.4680402-33559-233348451210513/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=25e801a9a05537c191e2aa500f19076ac31d3e5b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
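Note: six .repo files plus a delorean.repo.md5 stamp are copied in above; their contents are masked (content=NOT_LOGGING_PARAMETER). A minimal sketch of what such a DLRN repo file typically looks like -- the baseurl is an assumption, not the logged value:

    # hypothetical sketch of /etc/yum.repos.d/delorean.repo; real content was not logged
    cat > /etc/yum.repos.d/delorean.repo <<'EOF'
    [delorean]
    name=delorean-antelope
    baseurl=https://trunk.rdoproject.org/centos9-antelope/current/
    enabled=1
    gpgcheck=0
    EOF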
Nov 26 17:34:58 np0005537197 python3[30852]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:39:57 np0005537197 systemd[1]: session-7.scope: Deactivated successfully.
Nov 26 17:39:57 np0005537197 systemd[1]: session-7.scope: Consumed 5.499s CPU time.
Nov 26 17:39:57 np0005537197 systemd-logind[819]: Session 7 logged out. Waiting for processes to exit.
Nov 26 17:39:57 np0005537197 systemd-logind[819]: Removed session 7.
Nov 26 17:48:00 np0005537197 systemd-logind[819]: New session 8 of user zuul.
Nov 26 17:48:00 np0005537197 systemd[1]: Started Session 8 of User zuul.
Nov 26 17:48:01 np0005537197 python3.9[31013]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:48:02 np0005537197 python3.9[31194]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
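Note: with #012 decoded to newlines, the shell payload of that task reads:

    set -euxo pipefail
    pushd /var/tmp
    curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
    pushd repo-setup-main
    python3 -m venv ./venv
    PBR_VERSION=0.0.0 ./venv/bin/pip install ./
    ./venv/bin/repo-setup current-podified -b antelope
    popd
    rm -rf repo-setup-main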
Nov 26 17:48:10 np0005537197 systemd[1]: session-8.scope: Deactivated successfully.
Nov 26 17:48:10 np0005537197 systemd[1]: session-8.scope: Consumed 7.677s CPU time.
Nov 26 17:48:10 np0005537197 systemd-logind[819]: Session 8 logged out. Waiting for processes to exit.
Nov 26 17:48:10 np0005537197 systemd-logind[819]: Removed session 8.
Nov 26 17:48:16 np0005537197 systemd-logind[819]: New session 9 of user zuul.
Nov 26 17:48:16 np0005537197 systemd[1]: Started Session 9 of User zuul.
Nov 26 17:48:17 np0005537197 python3.9[31406]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:48:17 np0005537197 systemd[1]: session-9.scope: Deactivated successfully.
Nov 26 17:48:17 np0005537197 systemd-logind[819]: Session 9 logged out. Waiting for processes to exit.
Nov 26 17:48:17 np0005537197 systemd-logind[819]: Removed session 9.
Nov 26 17:48:33 np0005537197 systemd-logind[819]: New session 10 of user zuul.
Nov 26 17:48:33 np0005537197 systemd[1]: Started Session 10 of User zuul.
Nov 26 17:48:34 np0005537197 python3.9[31587]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 26 17:48:35 np0005537197 python3.9[31761]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:48:36 np0005537197 python3.9[31913]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:48:38 np0005537197 python3.9[32066]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 17:48:39 np0005537197 python3.9[32218]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:48:40 np0005537197 python3.9[32370]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:48:41 np0005537197 python3.9[32493]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764197319.7454183-73-234709612216943/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:48:42 np0005537197 python3.9[32645]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:48:43 np0005537197 python3.9[32801]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:48:43 np0005537197 python3.9[32953]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:48:44 np0005537197 python3.9[33103]: ansible-ansible.builtin.service_facts Invoked
Nov 26 17:48:49 np0005537197 python3.9[33356]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
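Note: /proc/cmdline is read-only, so this lineinfile task cannot modify anything; presumably run in check mode, it verifies that the node booted with cloud-init=disabled on the kernel command line. The plain-shell equivalent of that check:

    grep -q 'cloud-init=disabled' /proc/cmdline \
      && echo 'cloud-init disabled at boot' \
      || echo 'cloud-init NOT disabled'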
Nov 26 17:48:50 np0005537197 python3.9[33506]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:48:51 np0005537197 python3.9[33660]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:48:52 np0005537197 python3.9[33818]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 17:48:53 np0005537197 python3.9[33902]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 17:49:34 np0005537197 systemd[1]: Reloading.
Nov 26 17:49:34 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 17:49:34 np0005537197 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 26 17:49:35 np0005537197 systemd[1]: Reloading.
Nov 26 17:49:35 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 17:49:35 np0005537197 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 26 17:49:35 np0005537197 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 26 17:49:35 np0005537197 systemd[1]: Reloading.
Nov 26 17:49:35 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 17:49:35 np0005537197 systemd[1]: Starting dnf makecache...
Nov 26 17:49:35 np0005537197 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 26 17:49:36 np0005537197 dnf[34188]: Failed determining last makecache time.
Nov 26 17:49:36 np0005537197 dnf[34188]: delorean-openstack-barbican-42b4c41831408a8e323 131 kB/s | 3.0 kB     00:00
Nov 26 17:49:36 np0005537197 dnf[34188]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 163 kB/s | 3.0 kB     00:00
Nov 26 17:49:36 np0005537197 dnf[34188]: delorean-openstack-cinder-1c00d6490d88e436f26ef 161 kB/s | 3.0 kB     00:00
Nov 26 17:49:36 np0005537197 dbus-broker-launch[785]: Noticed file-system modification, trigger reload.
Nov 26 17:49:36 np0005537197 dbus-broker-launch[785]: Noticed file-system modification, trigger reload.
Nov 26 17:49:36 np0005537197 dnf[34188]: delorean-python-stevedore-c4acc5639fd2329372142 187 kB/s | 3.0 kB     00:00
Nov 26 17:49:36 np0005537197 dbus-broker-launch[785]: Noticed file-system modification, trigger reload.
Nov 26 17:49:36 np0005537197 dnf[34188]: delorean-python-cloudkitty-tests-tempest-2c80f8 178 kB/s | 3.0 kB     00:00
Nov 26 17:49:36 np0005537197 dnf[34188]: delorean-os-net-config-9758ab42364673d01bc5014e 195 kB/s | 3.0 kB     00:00
Nov 26 17:49:36 np0005537197 dnf[34188]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 166 kB/s | 3.0 kB     00:00
Nov 26 17:49:36 np0005537197 dnf[34188]: delorean-python-designate-tests-tempest-347fdbc 167 kB/s | 3.0 kB     00:00
Nov 26 17:49:36 np0005537197 dnf[34188]: delorean-openstack-glance-1fd12c29b339f30fe823e 172 kB/s | 3.0 kB     00:00
Nov 26 17:49:36 np0005537197 dnf[34188]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 169 kB/s | 3.0 kB     00:00
Nov 26 17:49:36 np0005537197 dnf[34188]: delorean-openstack-manila-3c01b7181572c95dac462 175 kB/s | 3.0 kB     00:00
Nov 26 17:49:36 np0005537197 dnf[34188]: delorean-python-whitebox-neutron-tests-tempest- 171 kB/s | 3.0 kB     00:00
Nov 26 17:49:36 np0005537197 dnf[34188]: delorean-openstack-octavia-ba397f07a7331190208c 174 kB/s | 3.0 kB     00:00
Nov 26 17:49:36 np0005537197 dnf[34188]: delorean-openstack-watcher-c014f81a8647287f6dcc 177 kB/s | 3.0 kB     00:00
Nov 26 17:49:36 np0005537197 dnf[34188]: delorean-python-tcib-1124124ec06aadbac34f0d340b 169 kB/s | 3.0 kB     00:00
Nov 26 17:49:36 np0005537197 dnf[34188]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 172 kB/s | 3.0 kB     00:00
Nov 26 17:49:36 np0005537197 dnf[34188]: delorean-openstack-swift-dc98a8463506ac520c469a 150 kB/s | 3.0 kB     00:00
Nov 26 17:49:36 np0005537197 dnf[34188]: delorean-python-tempestconf-8515371b7cceebd4282 164 kB/s | 3.0 kB     00:00
Nov 26 17:49:36 np0005537197 dnf[34188]: delorean-openstack-heat-ui-013accbfd179753bc3f0 166 kB/s | 3.0 kB     00:00
Nov 26 17:49:36 np0005537197 dnf[34188]: CentOS Stream 9 - BaseOS                         86 kB/s | 7.3 kB     00:00
Nov 26 17:49:36 np0005537197 dnf[34188]: CentOS Stream 9 - AppStream                      29 kB/s | 7.4 kB     00:00
Nov 26 17:49:37 np0005537197 dnf[34188]: CentOS Stream 9 - CRB                            27 kB/s | 7.2 kB     00:00
Nov 26 17:49:37 np0005537197 dnf[34188]: CentOS Stream 9 - Extras packages                70 kB/s | 8.3 kB     00:00
Nov 26 17:49:37 np0005537197 dnf[34188]: dlrn-antelope-testing                           146 kB/s | 3.0 kB     00:00
Nov 26 17:49:37 np0005537197 dnf[34188]: dlrn-antelope-build-deps                        159 kB/s | 3.0 kB     00:00
Nov 26 17:49:37 np0005537197 dnf[34188]: centos9-rabbitmq                                 75 kB/s | 3.0 kB     00:00
Nov 26 17:49:37 np0005537197 dnf[34188]: centos9-storage                                 108 kB/s | 3.0 kB     00:00
Nov 26 17:49:37 np0005537197 dnf[34188]: centos9-opstools                                124 kB/s | 3.0 kB     00:00
Nov 26 17:49:37 np0005537197 dnf[34188]: NFV SIG OpenvSwitch                             118 kB/s | 3.0 kB     00:00
Nov 26 17:49:37 np0005537197 dnf[34188]: repo-setup-centos-appstream                     139 kB/s | 4.4 kB     00:00
Nov 26 17:49:37 np0005537197 dnf[34188]: repo-setup-centos-baseos                         88 kB/s | 3.9 kB     00:00
Nov 26 17:49:37 np0005537197 dnf[34188]: repo-setup-centos-highavailability              133 kB/s | 3.9 kB     00:00
Nov 26 17:49:37 np0005537197 dnf[34188]: repo-setup-centos-powertools                    159 kB/s | 4.3 kB     00:00
Nov 26 17:49:38 np0005537197 dnf[34188]: Extra Packages for Enterprise Linux 9 - x86_64  296 kB/s |  35 kB     00:00
Nov 26 17:49:38 np0005537197 dnf[34188]: Metadata cache created.
Nov 26 17:49:38 np0005537197 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 26 17:49:38 np0005537197 systemd[1]: Finished dnf makecache.
Nov 26 17:49:38 np0005537197 systemd[1]: dnf-makecache.service: Consumed 1.703s CPU time.
Nov 26 17:50:37 np0005537197 kernel: SELinux:  Converting 2718 SID table entries...
Nov 26 17:50:37 np0005537197 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 17:50:37 np0005537197 kernel: SELinux:  policy capability open_perms=1
Nov 26 17:50:37 np0005537197 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 17:50:37 np0005537197 kernel: SELinux:  policy capability always_check_network=0
Nov 26 17:50:37 np0005537197 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 17:50:37 np0005537197 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 17:50:37 np0005537197 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 17:50:37 np0005537197 dbus-broker-launch[792]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 26 17:50:37 np0005537197 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 17:50:37 np0005537197 systemd[1]: Starting man-db-cache-update.service...
Nov 26 17:50:37 np0005537197 systemd[1]: Reloading.
Nov 26 17:50:37 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 17:50:38 np0005537197 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 17:50:39 np0005537197 python3.9[35370]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:50:39 np0005537197 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 17:50:39 np0005537197 systemd[1]: Finished man-db-cache-update.service.
Nov 26 17:50:39 np0005537197 systemd[1]: man-db-cache-update.service: Consumed 1.445s CPU time.
Nov 26 17:50:39 np0005537197 systemd[1]: run-rc2a294f89dc7475988767acfad4f72fb.service: Deactivated successfully.
Nov 26 17:50:41 np0005537197 python3.9[35743]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 26 17:50:42 np0005537197 python3.9[35895]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 26 17:50:44 np0005537197 python3.9[36048]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:50:45 np0005537197 python3.9[36200]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
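Note: the steps at 17:50:42-17:50:45 create a 1 GiB swap file (count=1024 x bs=1M), lock it down to 0600, and register it in /etc/fstab; the matching mkswap and swapon run at 17:51:24 further below. Consolidated:

    dd if=/dev/zero of=/swap count=1024 bs=1M
    chown root:root /swap && chmod 0600 /swap
    echo '/swap none swap sw 0 0' >> /etc/fstab
    mkswap /swap
    swapon /swap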
Nov 26 17:50:46 np0005537197 python3.9[36352]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:50:47 np0005537197 python3.9[36504]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:50:48 np0005537197 python3.9[36627]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197447.180274-236-271312214767427/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=228bd9abec6d1d59346d137ac91d935aec1bafa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:50:51 np0005537197 python3.9[36779]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 17:50:52 np0005537197 python3.9[36931]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:50:53 np0005537197 python3.9[37084]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
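Note: vgimportdevices --all writes an LVM devices file listing every visible PV; the follow-up touch guarantees /etc/lvm/devices/system.devices exists even when no volume groups were found, which switches lvm2 to devices-file filtering. By hand:

    /usr/sbin/vgimportdevices --all
    touch /etc/lvm/devices/system.devices
    chmod 0600 /etc/lvm/devices/system.devices
    lvmdevices   # list the recorded entries, if any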
Nov 26 17:50:54 np0005537197 python3.9[37236]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 26 17:50:54 np0005537197 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 17:50:55 np0005537197 python3.9[37390]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 26 17:50:57 np0005537197 python3.9[37548]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 26 17:50:58 np0005537197 python3.9[37708]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 26 17:50:58 np0005537197 python3.9[37861]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 26 17:50:59 np0005537197 python3.9[38019]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Nov 26 17:51:00 np0005537197 python3.9[38171]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 17:51:03 np0005537197 python3.9[38324]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:51:04 np0005537197 python3.9[38476]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:51:04 np0005537197 python3.9[38599]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764197463.4490337-355-22732394782575/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:51:06 np0005537197 python3.9[38751]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 17:51:06 np0005537197 systemd[1]: Starting Load Kernel Modules...
Nov 26 17:51:06 np0005537197 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 26 17:51:06 np0005537197 kernel: Bridge firewalling registered
Nov 26 17:51:06 np0005537197 systemd-modules-load[38755]: Inserted module 'br_netfilter'
Nov 26 17:51:06 np0005537197 systemd[1]: Finished Load Kernel Modules.
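Note: the 99-edpm.conf drop-in written at 17:51:04 is masked, but the restart shows it loads at least br_netfilter, which the kernel message above says is required for bridged traffic to traverse ip/nftables. A minimal equivalent -- the real file may list more modules:

    echo br_netfilter > /etc/modules-load.d/99-edpm.conf   # assumption: real file may differ
    systemctl restart systemd-modules-load.service
    lsmod | grep br_netfilter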
Nov 26 17:51:07 np0005537197 python3.9[38910]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:51:07 np0005537197 python3.9[39033]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764197466.5828931-378-145691472150987/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:51:08 np0005537197 python3.9[39185]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 17:51:12 np0005537197 dbus-broker-launch[785]: Noticed file-system modification, trigger reload.
Nov 26 17:51:12 np0005537197 dbus-broker-launch[785]: Noticed file-system modification, trigger reload.
Nov 26 17:51:12 np0005537197 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 17:51:12 np0005537197 systemd[1]: Starting man-db-cache-update.service...
Nov 26 17:51:12 np0005537197 systemd[1]: Reloading.
Nov 26 17:51:12 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 17:51:12 np0005537197 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 17:51:14 np0005537197 python3.9[40386]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 17:51:14 np0005537197 python3.9[41231]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 26 17:51:15 np0005537197 python3.9[41934]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 17:51:16 np0005537197 python3.9[42868]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:51:16 np0005537197 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 26 17:51:17 np0005537197 systemd[1]: Starting Authorization Manager...
Nov 26 17:51:17 np0005537197 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 26 17:51:17 np0005537197 polkitd[43569]: Started polkitd version 0.117
Nov 26 17:51:17 np0005537197 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 17:51:17 np0005537197 systemd[1]: Finished man-db-cache-update.service.
Nov 26 17:51:17 np0005537197 systemd[1]: man-db-cache-update.service: Consumed 6.240s CPU time.
Nov 26 17:51:17 np0005537197 systemd[1]: run-r9ccdaf45ec6d422aaae335098f54327f.service: Deactivated successfully.
Nov 26 17:51:17 np0005537197 systemd[1]: Started Authorization Manager.
Nov 26 17:51:18 np0005537197 python3.9[43740]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 17:51:18 np0005537197 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 26 17:51:18 np0005537197 systemd[1]: tuned.service: Deactivated successfully.
Nov 26 17:51:18 np0005537197 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 26 17:51:18 np0005537197 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 26 17:51:18 np0005537197 systemd[1]: Started Dynamic System Tuning Daemon.
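Note: the tuned sequence above installs the packages, applies the throughput-performance profile, then enables and restarts the daemon so the profile persists. Manually:

    dnf -y install tuned tuned-profiles-cpu-partitioning
    tuned-adm profile throughput-performance
    systemctl enable --now tuned
    tuned-adm active   # should report throughput-performance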
Nov 26 17:51:19 np0005537197 python3.9[43902]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 26 17:51:22 np0005537197 python3.9[44054]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 17:51:22 np0005537197 systemd[1]: Reloading.
Nov 26 17:51:22 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 17:51:23 np0005537197 python3.9[44244]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 17:51:23 np0005537197 systemd[1]: Reloading.
Nov 26 17:51:23 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 17:51:24 np0005537197 python3.9[44432]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:51:25 np0005537197 python3.9[44585]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:51:25 np0005537197 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Nov 26 17:51:26 np0005537197 python3.9[44738]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
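Note: update-ca-trust regenerates the consolidated bundles under /etc/pki/ca-trust/extracted from everything in source/anchors, including the tls-ca-bundle.pem copied in at 17:50:48:

    cp tls-ca-bundle.pem /etc/pki/ca-trust/source/anchors/
    update-ca-trust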
Nov 26 17:51:28 np0005537197 python3.9[44900]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
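Note: writing 2 to /sys/kernel/mm/ksm/run does more than stop kernel samepage merging: per the kernel documentation it also unmerges all currently merged pages, complementing the ksm/ksmtuned service stops at 17:51:22-17:51:23 above:

    echo 2 > /sys/kernel/mm/ksm/run
    cat /sys/kernel/mm/ksm/pages_shared   # reads 0 once unmerging completes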
Nov 26 17:51:29 np0005537197 python3.9[45053]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 17:51:29 np0005537197 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 26 17:51:29 np0005537197 systemd[1]: Stopped Apply Kernel Variables.
Nov 26 17:51:29 np0005537197 systemd[1]: Stopping Apply Kernel Variables...
Nov 26 17:51:29 np0005537197 systemd[1]: Starting Apply Kernel Variables...
Nov 26 17:51:29 np0005537197 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 26 17:51:30 np0005537197 systemd[1]: Finished Apply Kernel Variables.
Nov 26 17:51:30 np0005537197 systemd[1]: session-10.scope: Deactivated successfully.
Nov 26 17:51:30 np0005537197 systemd[1]: session-10.scope: Consumed 2min 18.153s CPU time.
Nov 26 17:51:30 np0005537197 systemd-logind[819]: Session 10 logged out. Waiting for processes to exit.
Nov 26 17:51:30 np0005537197 systemd-logind[819]: Removed session 10.
Nov 26 17:51:36 np0005537197 systemd-logind[819]: New session 11 of user zuul.
Nov 26 17:51:36 np0005537197 systemd[1]: Started Session 11 of User zuul.
Nov 26 17:51:38 np0005537197 python3.9[45236]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:51:39 np0005537197 python3.9[45390]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:51:41 np0005537197 python3.9[45546]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:51:42 np0005537197 python3.9[45697]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:51:43 np0005537197 python3.9[45853]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 17:51:44 np0005537197 python3.9[45937]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 17:51:46 np0005537197 python3.9[46090]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 17:51:47 np0005537197 python3.9[46261]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:51:48 np0005537197 python3.9[46413]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:51:48 np0005537197 systemd[1]: var-lib-containers-storage-overlay-compat542242076-merged.mount: Deactivated successfully.
Nov 26 17:51:48 np0005537197 podman[46414]: 2025-11-26 22:51:48.351572229 +0000 UTC m=+0.061279555 system refresh
Nov 26 17:51:49 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:51:49 np0005537197 python3.9[46577]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:51:50 np0005537197 python3.9[46700]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197508.627099-109-141584749299085/.source.json follow=False _original_basename=podman_network_config.j2 checksum=01b7227c5cb6ff292327bd323d1d603b55688ffe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:51:50 np0005537197 python3.9[46852]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:51:51 np0005537197 python3.9[46975]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764197510.3789353-124-98900785987409/.source.conf follow=False _original_basename=registries.conf.j2 checksum=888b975826b2c6c0439200ce8ac9219b96c0abdf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:51:52 np0005537197 python3.9[47127]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:51:53 np0005537197 python3.9[47279]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:51:54 np0005537197 python3.9[47431]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:51:54 np0005537197 python3.9[47583]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
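The four ini_file tasks between 17:51:52 and 17:51:54 (pids_limit under [containers], events_logger and runtime under [engine], network_backend under [network]) leave /etc/containers/containers.conf capping each container at 4096 pids, logging container events to journald, and running containers with crun over netavark networking. A minimal Python sketch of the same edits, assuming the file holds only simple key = value pairs (containers.conf is TOML, which configparser can round-trip for pairs this simple):

    import configparser

    # Reproduce the four ini_file edits logged above on containers.conf.
    conf = configparser.ConfigParser(interpolation=None)
    conf.read("/etc/containers/containers.conf")

    settings = {
        "containers": {"pids_limit": "4096"},
        "engine": {"events_logger": '"journald"', "runtime": '"crun"'},
        "network": {"network_backend": '"netavark"'},
    }
    for section, options in settings.items():
        if not conf.has_section(section):
            conf.add_section(section)
        for option, value in options.items():
            conf.set(section, option, value)

    with open("/etc/containers/containers.conf", "w") as f:
        conf.write(f)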
Nov 26 17:51:55 np0005537197 python3.9[47733]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:51:56 np0005537197 python3.9[47887]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 26 17:51:58 np0005537197 python3.9[48040]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 26 17:52:01 np0005537197 python3.9[48201]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 26 17:52:03 np0005537197 python3.9[48354]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 26 17:52:06 np0005537197 python3.9[48507]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['NetworkManager-ovs'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 26 17:52:08 np0005537197 python3.9[48663]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 26 17:52:13 np0005537197 python3.9[48833]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 26 17:52:15 np0005537197 python3.9[48986]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt', 'libvirt-admin', 'libvirt-client', 'libvirt-daemon', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 26 17:52:31 np0005537197 python3.9[49323]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['iscsi-initiator-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
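Every ansible.legacy.dnf task from 17:51:56 through 17:52:31 runs with download_only=True and state=None: nothing is installed yet, the package sets for the later install phase are only pre-fetched into the dnf cache so the installs beginning at 17:53:50 do not wait on the network. On the command line the same pre-fetch is dnf install --downloadonly; a short sketch driven from Python, with the package sets abridged from the logged invocations:

    import subprocess

    # Pre-fetch package sets into the dnf cache without installing them,
    # mirroring the download_only=True tasks above (list abridged).
    package_sets = [
        ["podman", "buildah"],
        ["os-net-config"],
        ["openssh-server"],
        ["iscsi-initiator-utils"],
    ]
    for packages in package_sets:
        subprocess.run(["dnf", "-y", "install", "--downloadonly", *packages],
                       check=True)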
Nov 26 17:52:33 np0005537197 python3.9[49479]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:52:34 np0005537197 python3.9[49654]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:52:34 np0005537197 python3.9[49777]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764197553.59594-272-125488046829911/.source.json _original_basename=.ms2n4c2s follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:52:36 np0005537197 python3.9[49929]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 26 17:52:36 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:52:38 np0005537197 systemd[1]: var-lib-containers-storage-overlay-compat1102900831-lower\x2dmapped.mount: Deactivated successfully.
Nov 26 17:52:41 np0005537197 podman[49941]: 2025-11-26 22:52:41.916528438 +0000 UTC m=+5.751723992 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 26 17:52:41 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:52:41 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:52:41 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:52:43 np0005537197 python3.9[50243]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 26 17:52:43 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:52:52 np0005537197 podman[50255]: 2025-11-26 22:52:52.82057404 +0000 UTC m=+9.671135035 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 26 17:52:52 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:52:52 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:52:52 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:52:53 np0005537197 python3.9[50551]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 26 17:52:54 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:52:55 np0005537197 podman[50562]: 2025-11-26 22:52:55.202326936 +0000 UTC m=+1.178472168 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 26 17:52:55 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:52:55 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:52:55 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:52:56 np0005537197 python3.9[50795]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 26 17:52:56 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:53:05 np0005537197 podman[50807]: 2025-11-26 22:53:05.594477711 +0000 UTC m=+9.143098378 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 26 17:53:05 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:53:05 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:53:05 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:53:06 np0005537197 python3.9[51081]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 26 17:53:07 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:53:18 np0005537197 podman[51093]: 2025-11-26 22:53:18.992873869 +0000 UTC m=+11.951108900 image pull 64a16ed7692810b1a8f0a7e67b7d8c7ca1d63d1a94542312fec7e65db8b42eda quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Nov 26 17:53:18 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:53:19 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:53:19 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:53:20 np0005537197 python3.9[51416]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter:v1.5.0 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 26 17:53:20 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:53:21 np0005537197 podman[51428]: 2025-11-26 22:53:21.780190341 +0000 UTC m=+1.651826700 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Nov 26 17:53:21 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:53:21 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:53:21 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:53:23 np0005537197 python3.9[51704]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 26 17:53:23 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:53:28 np0005537197 podman[51715]: 2025-11-26 22:53:28.237989784 +0000 UTC m=+5.119610222 image pull 743c1960518ee2a8df257b87dd40a31faa57a99c6d0aa394baae4cd418c3c2b2 quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Nov 26 17:53:28 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:53:28 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:53:28 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:53:29 np0005537197 python3.9[51973]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/sustainable_computing_io/kepler:release-0.7.12 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 26 17:53:35 np0005537197 podman[51985]: 2025-11-26 22:53:35.407190681 +0000 UTC m=+6.204562958 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Nov 26 17:53:35 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:53:35 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:53:35 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
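Each containers.podman.podman_image task in this run pulls one tagged reference with pull=True and the auth file written at 17:52:34, and podman records an 'image pull' event carrying the resolved image ID (52cb1910f3f0… for the ovn-controller image, and so on); the interleaved var-lib-containers-storage-overlay.mount lines are systemd noting that podman's transient overlay mounts were unmounted again. A minimal sketch of the same pull sequence through the podman CLI:

    import subprocess

    AUTH_FILE = "/root/.config/containers/auth.json"

    # References copied from the podman_image invocations above.
    images = [
        "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified",
        "quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified",
        "quay.io/podified-antelope-centos9/openstack-multipathd:current-podified",
        "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified",
        "quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested",
        "quay.io/prometheus/node-exporter:v1.5.0",
        "quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified",
        "quay.io/sustainable_computing_io/kepler:release-0.7.12",
    ]
    for image in images:
        # podman prints the resolved image ID on success.
        subprocess.run(["podman", "pull", "--authfile", AUTH_FILE, image],
                       check=True)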
Nov 26 17:53:36 np0005537197 systemd[1]: session-11.scope: Deactivated successfully.
Nov 26 17:53:36 np0005537197 systemd[1]: session-11.scope: Consumed 2min 33.881s CPU time.
Nov 26 17:53:36 np0005537197 systemd-logind[819]: Session 11 logged out. Waiting for processes to exit.
Nov 26 17:53:36 np0005537197 systemd-logind[819]: Removed session 11.
Nov 26 17:53:41 np0005537197 systemd-logind[819]: New session 12 of user zuul.
Nov 26 17:53:41 np0005537197 systemd[1]: Started Session 12 of User zuul.
Nov 26 17:53:43 np0005537197 python3.9[52387]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:53:44 np0005537197 python3.9[52543]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 26 17:53:45 np0005537197 python3.9[52696]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 26 17:53:46 np0005537197 python3.9[52854]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
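The getent/group/user sequence above pins an openvswitch account to uid/gid 42476, shell-less and with supplementary membership in hugetlbfs (the group that typically grants access to hugepage-backed memory for DPDK-style datapaths). A sketch of the equivalent idempotent provisioning:

    import subprocess

    # Idempotent recreation of the group/user tasks logged above.
    # getent exits non-zero when the account does not exist yet.
    if subprocess.run(["getent", "group", "openvswitch"]).returncode != 0:
        subprocess.run(["groupadd", "-g", "42476", "openvswitch"], check=True)
    if subprocess.run(["getent", "passwd", "openvswitch"]).returncode != 0:
        subprocess.run(
            ["useradd", "-u", "42476", "-g", "openvswitch",
             "-G", "hugetlbfs", "-s", "/sbin/nologin",
             "-c", "openvswitch user", "openvswitch"],
            check=True,
        )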
Nov 26 17:53:47 np0005537197 python3.9[53014]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 17:53:48 np0005537197 python3.9[53098]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 26 17:53:50 np0005537197 python3.9[53259]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 17:54:02 np0005537197 kernel: SELinux:  Converting 2731 SID table entries...
Nov 26 17:54:02 np0005537197 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 17:54:02 np0005537197 kernel: SELinux:  policy capability open_perms=1
Nov 26 17:54:02 np0005537197 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 17:54:02 np0005537197 kernel: SELinux:  policy capability always_check_network=0
Nov 26 17:54:02 np0005537197 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 17:54:02 np0005537197 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 17:54:02 np0005537197 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 17:54:03 np0005537197 dbus-broker-launch[792]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 26 17:54:03 np0005537197 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 26 17:54:04 np0005537197 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 17:54:04 np0005537197 systemd[1]: Starting man-db-cache-update.service...
Nov 26 17:54:04 np0005537197 systemd[1]: Reloading.
Nov 26 17:54:04 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 17:54:04 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 17:54:05 np0005537197 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 17:54:05 np0005537197 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 17:54:05 np0005537197 systemd[1]: Finished man-db-cache-update.service.
Nov 26 17:54:05 np0005537197 systemd[1]: man-db-cache-update.service: Consumed 1.020s CPU time.
Nov 26 17:54:05 np0005537197 systemd[1]: run-redd022063fb248749b820a7d4d9916ef.service: Deactivated successfully.
Nov 26 17:54:06 np0005537197 python3.9[54357]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 17:54:06 np0005537197 systemd[1]: Reloading.
Nov 26 17:54:07 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 17:54:07 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 17:54:07 np0005537197 systemd[1]: Starting Open vSwitch Database Unit...
Nov 26 17:54:07 np0005537197 chown[54399]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 26 17:54:07 np0005537197 ovs-ctl[54404]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 26 17:54:07 np0005537197 ovs-ctl[54404]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 26 17:54:07 np0005537197 ovs-ctl[54404]: Starting ovsdb-server [  OK  ]
Nov 26 17:54:07 np0005537197 ovs-vsctl[54453]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 26 17:54:07 np0005537197 ovs-vsctl[54473]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"bbd59242-3683-4df7-8a2a-12b2eb702783\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 26 17:54:07 np0005537197 ovs-ctl[54404]: Configuring Open vSwitch system IDs [  OK  ]
Nov 26 17:54:07 np0005537197 ovs-ctl[54404]: Enabling remote OVSDB managers [  OK  ]
Nov 26 17:54:07 np0005537197 systemd[1]: Started Open vSwitch Database Unit.
Nov 26 17:54:07 np0005537197 ovs-vsctl[54479]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 26 17:54:07 np0005537197 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 26 17:54:07 np0005537197 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 26 17:54:07 np0005537197 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 26 17:54:07 np0005537197 kernel: openvswitch: Open vSwitch switching datapath
Nov 26 17:54:07 np0005537197 ovs-ctl[54523]: Inserting openvswitch module [  OK  ]
Nov 26 17:54:07 np0005537197 ovs-ctl[54492]: Starting ovs-vswitchd [  OK  ]
Nov 26 17:54:08 np0005537197 ovs-vsctl[54540]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 26 17:54:08 np0005537197 ovs-ctl[54492]: Enabling remote OVSDB managers [  OK  ]
Nov 26 17:54:08 np0005537197 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 26 17:54:08 np0005537197 systemd[1]: Starting Open vSwitch...
Nov 26 17:54:08 np0005537197 systemd[1]: Finished Open vSwitch.
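First start of ovsdb-server creates an empty /etc/openvswitch/conf.db (the chown complaint about /run/openvswitch is expected before the daemon has created that directory), and ovs-ctl then stamps the Open_vSwitch root record with db-version, ovs-version, a generated system-id, and the hostname, exactly as the ovs-vsctl audit lines show. Those fields can be read back later; a small sketch:

    import subprocess

    # Read back the identity metadata ovs-ctl wrote at first start.
    def ovs_get(column: str) -> str:
        out = subprocess.run(
            ["ovs-vsctl", "get", "Open_vSwitch", ".", column],
            check=True, capture_output=True, text=True,
        )
        return out.stdout.strip()

    print(ovs_get("ovs_version"))             # "3.3.5-115.el9s"
    print(ovs_get("external_ids:system-id"))  # UUID generated by ovs-ctl
    print(ovs_get("external_ids:hostname"))   # "compute-0"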
Nov 26 17:54:09 np0005537197 python3.9[54692]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:54:10 np0005537197 python3.9[54844]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 26 17:54:11 np0005537197 kernel: SELinux:  Converting 2745 SID table entries...
Nov 26 17:54:11 np0005537197 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 17:54:11 np0005537197 kernel: SELinux:  policy capability open_perms=1
Nov 26 17:54:11 np0005537197 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 17:54:11 np0005537197 kernel: SELinux:  policy capability always_check_network=0
Nov 26 17:54:11 np0005537197 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 17:54:11 np0005537197 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 17:54:11 np0005537197 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
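The sefcontext task at 17:54:10 registers a file-context rule mapping /var/lib/edpm-config(/.*)? to container_file_t, and the SELinux kernel lines above are the policy reload that reload=True triggers (dbus-broker confirms the reload as seqno=10 just below). Outside Ansible the same change is a semanage rule plus a relabel; sketched from Python:

    import os
    import subprocess

    TARGET_REGEX = "/var/lib/edpm-config(/.*)?"
    TARGET_DIR = "/var/lib/edpm-config"

    # Equivalent of community.general.sefcontext with reload=True:
    # register the rule, then relabel the tree if it already exists
    # (here the directory itself is only created a few tasks later).
    subprocess.run(
        ["semanage", "fcontext", "-a", "-t", "container_file_t",
         "-r", "s0", TARGET_REGEX],
        check=True,
    )
    if os.path.isdir(TARGET_DIR):
        subprocess.run(["restorecon", "-R", TARGET_DIR], check=True)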
Nov 26 17:54:12 np0005537197 python3.9[54999]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:54:13 np0005537197 dbus-broker-launch[792]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 26 17:54:13 np0005537197 python3.9[55157]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 17:54:15 np0005537197 python3.9[55310]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:54:17 np0005537197 python3.9[55597]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 26 17:54:18 np0005537197 python3.9[55747]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 17:54:19 np0005537197 python3.9[55901]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 17:54:21 np0005537197 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 17:54:21 np0005537197 systemd[1]: Starting man-db-cache-update.service...
Nov 26 17:54:21 np0005537197 systemd[1]: Reloading.
Nov 26 17:54:21 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 17:54:21 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 17:54:21 np0005537197 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 17:54:22 np0005537197 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 17:54:22 np0005537197 systemd[1]: Finished man-db-cache-update.service.
Nov 26 17:54:22 np0005537197 systemd[1]: run-r9953a547ceee4659b8ea7971af131c41.service: Deactivated successfully.
Nov 26 17:54:23 np0005537197 python3.9[56218]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 17:54:23 np0005537197 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 26 17:54:23 np0005537197 systemd[1]: Stopped Network Manager Wait Online.
Nov 26 17:54:23 np0005537197 systemd[1]: Stopping Network Manager Wait Online...
Nov 26 17:54:23 np0005537197 systemd[1]: Stopping Network Manager...
Nov 26 17:54:23 np0005537197 NetworkManager[7182]: <info>  [1764197663.1999] caught SIGTERM, shutting down normally.
Nov 26 17:54:23 np0005537197 NetworkManager[7182]: <info>  [1764197663.2018] dhcp4 (eth0): canceled DHCP transaction
Nov 26 17:54:23 np0005537197 NetworkManager[7182]: <info>  [1764197663.2018] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 17:54:23 np0005537197 NetworkManager[7182]: <info>  [1764197663.2018] dhcp4 (eth0): state changed no lease
Nov 26 17:54:23 np0005537197 NetworkManager[7182]: <info>  [1764197663.2022] manager: NetworkManager state is now CONNECTED_SITE
Nov 26 17:54:23 np0005537197 NetworkManager[7182]: <info>  [1764197663.2127] exiting (success)
Nov 26 17:54:23 np0005537197 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 26 17:54:23 np0005537197 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 26 17:54:23 np0005537197 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 26 17:54:23 np0005537197 systemd[1]: Stopped Network Manager.
Nov 26 17:54:23 np0005537197 systemd[1]: NetworkManager.service: Consumed 13.544s CPU time, 4.3M memory peak, read 0B from disk, written 11.5K to disk.
Nov 26 17:54:23 np0005537197 systemd[1]: Starting Network Manager...
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.2736] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:3c62c5e9-a0a5-407a-900d-a0335b249ae4)
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.2739] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.2787] manager[0x560affcd7090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 26 17:54:23 np0005537197 systemd[1]: Starting Hostname Service...
Nov 26 17:54:23 np0005537197 systemd[1]: Started Hostname Service.
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.3916] hostname: hostname: using hostnamed
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.3917] hostname: static hostname changed from (none) to "compute-0"
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.3920] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.3924] manager[0x560affcd7090]: rfkill: Wi-Fi hardware radio set enabled
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.3924] manager[0x560affcd7090]: rfkill: WWAN hardware radio set enabled
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.3941] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.3949] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.3949] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.3950] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.3950] manager: Networking is enabled by state file
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.3951] settings: Loaded settings plugin: keyfile (internal)
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.3954] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.3976] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.3985] dhcp: init: Using DHCP client 'internal'
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.3987] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.3992] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.3996] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4001] device (lo): Activation: starting connection 'lo' (ea7e87d8-c88d-44a1-b899-586bba8705c2)
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4006] device (eth0): carrier: link connected
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4009] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4012] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4012] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4016] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4022] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4025] device (eth1): carrier: link connected
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4028] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4031] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (b01c471b-9386-5d78-b9ae-af7b0380978b) (indicated)
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4032] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4035] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4039] device (eth1): Activation: starting connection 'ci-private-network' (b01c471b-9386-5d78-b9ae-af7b0380978b)
Nov 26 17:54:23 np0005537197 systemd[1]: Started Network Manager.
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4047] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4053] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4055] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4056] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4058] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4060] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4062] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4065] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4067] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4072] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4074] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4080] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4089] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4111] dhcp4 (eth0): state changed new lease, address=38.102.83.156
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4116] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4203] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4209] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4211] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4212] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4217] device (lo): Activation: successful, device activated.
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4222] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4225] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4228] device (eth1): Activation: successful, device activated.
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4241] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4242] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4246] manager: NetworkManager state is now CONNECTED_SITE
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4249] device (eth0): Activation: successful, device activated.
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4254] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 26 17:54:23 np0005537197 NetworkManager[56227]: <info>  [1764197663.4257] manager: startup complete
Nov 26 17:54:23 np0005537197 systemd[1]: Starting Network Manager Wait Online...
Nov 26 17:54:23 np0005537197 systemd[1]: Finished Network Manager Wait Online.
Nov 26 17:54:24 np0005537197 python3.9[56445]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 17:54:33 np0005537197 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 17:54:33 np0005537197 systemd[1]: Starting man-db-cache-update.service...
Nov 26 17:54:33 np0005537197 systemd[1]: Reloading.
Nov 26 17:54:33 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 17:54:33 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 17:54:33 np0005537197 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 17:54:33 np0005537197 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 26 17:54:34 np0005537197 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 17:54:34 np0005537197 systemd[1]: Finished man-db-cache-update.service.
Nov 26 17:54:34 np0005537197 systemd[1]: run-ra7391464f5d14de3b3617910880b845c.service: Deactivated successfully.
Nov 26 17:54:35 np0005537197 python3.9[56903]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 17:54:36 np0005537197 python3.9[57055]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:54:37 np0005537197 python3.9[57209]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:54:38 np0005537197 python3.9[57361]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:54:39 np0005537197 python3.9[57513]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:54:40 np0005537197 python3.9[57665]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
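The five ini_file tasks from 17:54:36 to 17:54:40 prepare NetworkManager for os-net-config: no-auto-default=* stops NM from fabricating default DHCP connections for newly appearing interfaces, and any dns= or rc-manager= overrides are removed from both NetworkManager.conf and cloud-init's 99-cloud-init.conf drop-in so resolver and resolv.conf handling fall back to NM's defaults. The expected end state of the [main] section, checked with a short sketch:

    import configparser

    # Verify the effect of the ini_file edits above on NetworkManager.conf:
    # no-auto-default present, dns and rc-manager gone from [main].
    conf = configparser.ConfigParser(interpolation=None)
    conf.read("/etc/NetworkManager/NetworkManager.conf")

    assert conf.get("main", "no-auto-default") == "*"
    assert not conf.has_option("main", "dns")
    assert not conf.has_option("main", "rc-manager")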
Nov 26 17:54:41 np0005537197 python3.9[57817]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:54:42 np0005537197 python3.9[57940]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764197680.6336136-229-7789080818671/.source _original_basename=.91v_08l_ follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:54:42 np0005537197 python3.9[58092]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:54:43 np0005537197 python3.9[58244]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 26 17:54:44 np0005537197 python3.9[58396]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:54:47 np0005537197 python3.9[58823]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 26 17:54:48 np0005537197 ansible-async_wrapper.py[58998]: Invoked with j795554820487 300 /home/zuul/.ansible/tmp/ansible-tmp-1764197687.7367704-295-131522508965141/AnsiballZ_edpm_os_net_config.py _
Nov 26 17:54:48 np0005537197 ansible-async_wrapper.py[59001]: Starting module and watcher
Nov 26 17:54:48 np0005537197 ansible-async_wrapper.py[59001]: Start watching 59002 (300)
Nov 26 17:54:48 np0005537197 ansible-async_wrapper.py[59002]: Start module (59002)
Nov 26 17:54:48 np0005537197 ansible-async_wrapper.py[58998]: Return async_wrapper task started.
Nov 26 17:54:49 np0005537197 python3.9[59003]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
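edpm_os_net_config applies /etc/os-net-config/config.yaml (slurped at 17:54:47) with use_nmstate=True, so the layout is realized through NetworkManager rather than legacy ifcfg files, and it runs as an async task (the ansible-async_wrapper lines) under a 300-second watcher. The file's contents are not logged, but the checkpoint and connection-add trail that follows, br-ex as an OVS bridge with eth1 and vlan20/21/22 ports, implies a config roughly like the sketch below; only the device names come from the log, the addresses are invented:

    import yaml

    # Illustrative reconstruction of /etc/os-net-config/config.yaml from
    # the devices seen in the NetworkManager audit trail further down.
    config = yaml.safe_load("""
    network_config:
    - type: ovs_bridge
      name: br-ex
      use_dhcp: false
      members:
      - type: interface
        name: eth1
        primary: true
      - type: vlan
        vlan_id: 20
        addresses: [{ip_netmask: 172.17.0.100/24}]   # assumed
      - type: vlan
        vlan_id: 21
        addresses: [{ip_netmask: 172.18.0.100/24}]   # assumed
      - type: vlan
        vlan_id: 22
        addresses: [{ip_netmask: 172.19.0.100/24}]   # assumed
    """)
    print(config["network_config"][0]["name"])  # br-ex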
Nov 26 17:54:49 np0005537197 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 26 17:54:49 np0005537197 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 26 17:54:49 np0005537197 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 26 17:54:49 np0005537197 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 26 17:54:49 np0005537197 kernel: cfg80211: failed to load regulatory.db
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.8774] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59004 uid=0 result="success"
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.8796] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59004 uid=0 result="success"
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9516] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9518] audit: op="connection-add" uuid="89982a83-6fc6-48fc-a58f-75d6068b73fa" name="br-ex-br" pid=59004 uid=0 result="success"
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9543] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9545] audit: op="connection-add" uuid="bb23dff9-927f-46c9-97f1-c6e002c8c236" name="br-ex-port" pid=59004 uid=0 result="success"
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9565] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9567] audit: op="connection-add" uuid="fd758446-6543-402b-869a-52e8d9ebeebc" name="eth1-port" pid=59004 uid=0 result="success"
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9589] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9591] audit: op="connection-add" uuid="579da89c-1cda-4c76-8f6b-b93c3550ab4b" name="vlan20-port" pid=59004 uid=0 result="success"
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9612] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9614] audit: op="connection-add" uuid="64f91c85-b3a8-4289-89cd-c27627c17363" name="vlan21-port" pid=59004 uid=0 result="success"
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9634] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9636] audit: op="connection-add" uuid="68cb26bb-c8bc-4b29-bc61-e7b9f759a055" name="vlan22-port" pid=59004 uid=0 result="success"
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9669] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority,connection.timestamp,802-3-ethernet.mtu,ipv6.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method" pid=59004 uid=0 result="success"
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9696] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9699] audit: op="connection-add" uuid="3b0233f4-1d4d-43e7-9cfe-1d7689b1897b" name="br-ex-if" pid=59004 uid=0 result="success"
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9765] audit: op="connection-update" uuid="b01c471b-9386-5d78-b9ae-af7b0380978b" name="ci-private-network" args="ipv4.addresses,ipv4.routes,ipv4.method,ipv4.dns,ipv4.never-default,ipv4.routing-rules,connection.controller,connection.port-type,connection.master,connection.timestamp,connection.slave-type,ipv6.addresses,ipv6.routes,ipv6.addr-gen-mode,ipv6.dns,ipv6.method,ipv6.routing-rules,ovs-external-ids.data,ovs-interface.type" pid=59004 uid=0 result="success"
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9793] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9795] audit: op="connection-add" uuid="5f4a737c-2683-440c-ae06-d7f3e7e32971" name="vlan20-if" pid=59004 uid=0 result="success"
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9822] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9824] audit: op="connection-add" uuid="badaadff-5793-430a-98e8-220704d0752a" name="vlan21-if" pid=59004 uid=0 result="success"
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9855] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9857] audit: op="connection-add" uuid="1eca852f-d14c-4708-85e7-607a1ccf0d04" name="vlan22-if" pid=59004 uid=0 result="success"
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9877] audit: op="connection-delete" uuid="5c3b9889-44f2-341d-a218-5a4a108bc318" name="Wired connection 1" pid=59004 uid=0 result="success"
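The burst of connection-add audit entries above is the caller (pid 59004, the os-net-config Ansible run) building NetworkManager's three-layer Open vSwitch model for each device: an ovs-bridge profile, an ovs-port profile attached to the bridge, and an ovs-interface profile attached to the port. A minimal sketch of the equivalent nmcli calls for br-ex, using the connection names from the log (IP settings omitted; newer NM spells master/slave-type as controller/port-type, as the connection-update arguments above show):

    nmcli connection add type ovs-bridge    con-name br-ex-br   conn.interface br-ex
    nmcli connection add type ovs-port      con-name br-ex-port conn.interface br-ex master br-ex-br
    nmcli connection add type ovs-interface con-name br-ex-if   conn.interface br-ex master br-ex-port slave-type ovs-port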
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9897] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9913] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9919] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (89982a83-6fc6-48fc-a58f-75d6068b73fa)
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9921] audit: op="connection-activate" uuid="89982a83-6fc6-48fc-a58f-75d6068b73fa" name="br-ex-br" pid=59004 uid=0 result="success"
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9924] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9935] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9942] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (bb23dff9-927f-46c9-97f1-c6e002c8c236)
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9945] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9955] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9962] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (fd758446-6543-402b-869a-52e8d9ebeebc)
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9965] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9975] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9983] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (579da89c-1cda-4c76-8f6b-b93c3550ab4b)
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9986] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 17:54:50 np0005537197 NetworkManager[56227]: <info>  [1764197690.9997] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0004] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (64f91c85-b3a8-4289-89cd-c27627c17363)
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0007] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0017] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0024] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (68cb26bb-c8bc-4b29-bc61-e7b9f759a055)
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0025] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0029] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0033] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0043] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0050] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0057] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (3b0233f4-1d4d-43e7-9cfe-1d7689b1897b)
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0058] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0064] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0067] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0069] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0071] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0088] device (eth1): disconnecting for new activation request.
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0089] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0093] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0097] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0099] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0102] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0107] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0112] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (5f4a737c-2683-440c-ae06-d7f3e7e32971)
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0113] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0117] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0120] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0121] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0124] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0130] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0135] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (badaadff-5793-430a-98e8-220704d0752a)
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0136] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0140] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0142] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0144] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0147] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0152] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0157] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (1eca852f-d14c-4708-85e7-607a1ccf0d04)
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0158] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0162] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0164] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0166] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0168] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0184] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority,802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.method" pid=59004 uid=0 result="success"
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0187] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0191] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0194] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0201] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0206] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0211] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0215] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0217] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 kernel: ovs-system: entered promiscuous mode
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0237] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0243] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 systemd-udevd[59009]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 17:54:51 np0005537197 kernel: Timeout policy base is empty
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0258] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0260] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0266] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0271] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0274] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0276] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0282] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0286] dhcp4 (eth0): canceled DHCP transaction
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0287] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0287] dhcp4 (eth0): state changed no lease
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0288] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0299] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0302] audit: op="device-reapply" interface="eth1" ifindex=3 pid=59004 uid=0 result="fail" reason="Device is not activated"
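This is the only failed operation in the run, and it is benign: device-reapply requires a device in the activated state, and eth1 is mid-teardown at this point, about to be re-activated as an OVS port. The CLI equivalent of what pid 59004 attempted:

    # pushes modified connection properties to a live device without a full re-activation;
    # fails with "Device is not activated" unless the device is already up
    nmcli device reapply eth1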
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0335] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 26 17:54:51 np0005537197 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0340] dhcp4 (eth0): state changed new lease, address=38.102.83.156
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0385] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0393] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0400] device (eth1): disconnecting for new activation request.
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0401] audit: op="connection-activate" uuid="b01c471b-9386-5d78-b9ae-af7b0380978b" name="ci-private-network" pid=59004 uid=0 result="success"
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0424] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59004 uid=0 result="success"
Nov 26 17:54:51 np0005537197 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0521] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0667] device (eth1): Activation: starting connection 'ci-private-network' (b01c471b-9386-5d78-b9ae-af7b0380978b)
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0677] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0681] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 kernel: br-ex: entered promiscuous mode
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0700] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0701] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0703] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0704] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0705] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0706] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0725] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0730] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0734] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0738] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0741] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0744] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0747] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0750] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0755] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0758] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0761] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0782] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0785] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0790] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0796] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0835] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0837] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0843] device (eth1): Activation: successful, device activated.
Nov 26 17:54:51 np0005537197 kernel: vlan22: entered promiscuous mode
Nov 26 17:54:51 np0005537197 systemd-udevd[59008]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0940] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 26 17:54:51 np0005537197 kernel: vlan21: entered promiscuous mode
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.0965] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 systemd-udevd[59010]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.1021] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.1023] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 kernel: vlan20: entered promiscuous mode
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.1035] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 26 17:54:51 np0005537197 systemd-udevd[59102]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.1066] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 26 17:54:51 np0005537197 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.1102] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.1140] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.1149] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.1154] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.1161] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.1167] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.1223] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.1223] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.1228] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.1236] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.1253] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.1288] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.1289] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 26 17:54:51 np0005537197 NetworkManager[56227]: <info>  [1764197691.1296] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
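Every device above walks the same NetworkManager state machine, readable straight off the state change lines: unmanaged -> unavailable -> disconnected -> prepare -> config -> ip-config -> ip-check -> secondaries -> activated. Two ways to watch the same progression interactively (a sketch):

    nmcli -f GENERAL.DEVICE,GENERAL.STATE device show br-ex   # current state of one device
    nmcli monitor                                             # streams state changes as they happen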
Nov 26 17:54:52 np0005537197 NetworkManager[56227]: <info>  [1764197692.2289] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59004 uid=0 result="success"
Nov 26 17:54:52 np0005537197 NetworkManager[56227]: <info>  [1764197692.4462] checkpoint[0x560affcae950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 26 17:54:52 np0005537197 NetworkManager[56227]: <info>  [1764197692.4465] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59004 uid=0 result="success"
Nov 26 17:54:52 np0005537197 NetworkManager[56227]: <info>  [1764197692.7789] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59004 uid=0 result="success"
Nov 26 17:54:52 np0005537197 NetworkManager[56227]: <info>  [1764197692.7803] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59004 uid=0 result="success"
Nov 26 17:54:52 np0005537197 python3.9[59338]: ansible-ansible.legacy.async_status Invoked with jid=j795554820487.58998 mode=status _async_dir=/root/.ansible_async
Nov 26 17:54:53 np0005537197 NetworkManager[56227]: <info>  [1764197693.0204] audit: op="networking-control" arg="global-dns-configuration" pid=59004 uid=0 result="success"
Nov 26 17:54:53 np0005537197 NetworkManager[56227]: <info>  [1764197693.0231] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 26 17:54:53 np0005537197 NetworkManager[56227]: <info>  [1764197693.0262] audit: op="networking-control" arg="global-dns-configuration" pid=59004 uid=0 result="success"
Nov 26 17:54:53 np0005537197 NetworkManager[56227]: <info>  [1764197693.0293] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59004 uid=0 result="success"
Nov 26 17:54:53 np0005537197 NetworkManager[56227]: <info>  [1764197693.2101] checkpoint[0x560affcaea20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 26 17:54:53 np0005537197 NetworkManager[56227]: <info>  [1764197693.2104] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59004 uid=0 result="success"
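The checkpoint-create / adjust-rollback-timeout / checkpoint-destroy pattern bracketing these changes is NetworkManager's transactional safety net: had the caller lost connectivity because of its own changes and stopped extending the timeout, every device would have rolled back to the snapshot. Checkpoints are a D-Bus API rather than an nmcli command; a sketch via busctl, assuming flag 1 (destroy-all) and a 60-second rollback window:

    # args: device list (empty = all devices), rollback timeout in seconds, flags
    busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
        org.freedesktop.NetworkManager CheckpointCreate aouu 0 60 1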
Nov 26 17:54:53 np0005537197 ansible-async_wrapper.py[59002]: Module complete (59002)
Nov 26 17:54:53 np0005537197 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 26 17:54:53 np0005537197 ansible-async_wrapper.py[59001]: Done in kid B.
Nov 26 17:54:56 np0005537197 python3.9[59447]: ansible-ansible.legacy.async_status Invoked with jid=j795554820487.58998 mode=status _async_dir=/root/.ansible_async
Nov 26 17:54:57 np0005537197 python3.9[59546]: ansible-ansible.legacy.async_status Invoked with jid=j795554820487.58998 mode=cleanup _async_dir=/root/.ansible_async
Nov 26 17:54:57 np0005537197 python3.9[59698]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:54:58 np0005537197 python3.9[59821]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764197697.3425612-322-133521236667763/.source.returncode _original_basename=.x11re57p follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:54:59 np0005537197 python3.9[59973]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:55:00 np0005537197 python3.9[60097]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764197698.7657483-338-134688657246332/.source.cfg _original_basename=.u80edqh4 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
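Dropping 99-edpm-disable-network-config.cfg into /etc/cloud/cloud.cfg.d stops cloud-init from rewriting interface configuration that os-net-config now owns. The payload is not logged; the conventional content for this switch is a two-line YAML stanza (an assumption, shown as the shell that would create it):

    cat > /etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg <<'EOF'
    network:
      config: disabled
    EOF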
Nov 26 17:55:00 np0005537197 python3.9[60249]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 17:55:01 np0005537197 systemd[1]: Reloading Network Manager...
Nov 26 17:55:01 np0005537197 NetworkManager[56227]: <info>  [1764197701.0456] audit: op="reload" arg="0" pid=60253 uid=0 result="success"
Nov 26 17:55:01 np0005537197 NetworkManager[56227]: <info>  [1764197701.0465] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 26 17:55:01 np0005537197 systemd[1]: Reloaded Network Manager.
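state=reloaded on the NetworkManager unit maps to SIGHUP, exactly what the config: signal: SIGHUP line records: configuration files are re-read without restarting the daemon or bouncing any connection. By hand, the same thing is:

    systemctl reload NetworkManager   # what the Ansible task invoked
    nmcli general reload conf         # finer-grained: re-read just the config files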
Nov 26 17:55:01 np0005537197 systemd[1]: session-12.scope: Deactivated successfully.
Nov 26 17:55:01 np0005537197 systemd[1]: session-12.scope: Consumed 53.904s CPU time.
Nov 26 17:55:01 np0005537197 systemd-logind[819]: Session 12 logged out. Waiting for processes to exit.
Nov 26 17:55:01 np0005537197 systemd-logind[819]: Removed session 12.
Nov 26 17:55:06 np0005537197 systemd-logind[819]: New session 13 of user zuul.
Nov 26 17:55:06 np0005537197 systemd[1]: Started Session 13 of User zuul.
Nov 26 17:55:07 np0005537197 python3.9[60437]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:55:08 np0005537197 python3.9[60591]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 17:55:10 np0005537197 python3.9[60781]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:55:10 np0005537197 systemd[1]: session-13.scope: Deactivated successfully.
Nov 26 17:55:10 np0005537197 systemd[1]: session-13.scope: Consumed 2.408s CPU time.
Nov 26 17:55:10 np0005537197 systemd-logind[819]: Session 13 logged out. Waiting for processes to exit.
Nov 26 17:55:10 np0005537197 systemd-logind[819]: Removed session 13.
Nov 26 17:55:11 np0005537197 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 26 17:55:15 np0005537197 systemd-logind[819]: New session 14 of user zuul.
Nov 26 17:55:15 np0005537197 systemd[1]: Started Session 14 of User zuul.
Nov 26 17:55:17 np0005537197 python3.9[60963]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:55:18 np0005537197 python3.9[61117]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:55:19 np0005537197 python3.9[61274]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 17:55:20 np0005537197 python3.9[61358]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 17:55:22 np0005537197 python3.9[61512]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 17:55:23 np0005537197 python3.9[61703]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:55:24 np0005537197 python3.9[61855]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:55:24 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:55:26 np0005537197 python3.9[62018]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:55:26 np0005537197 python3.9[62096]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
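The inspect-then-write sequence freezes the default podman network definition into /etc/containers/networks/podman.json, so later podman or netavark upgrades cannot silently change the bridge name or subnet under running workloads (my reading of the task names; the file content itself is not logged). The command being wrapped:

    # prints the default network definition (bridge podman0, 10.88.0.0/16 subnet by default)
    podman network inspect podman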
Nov 26 17:55:27 np0005537197 python3.9[62248]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:55:28 np0005537197 python3.9[62326]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:55:29 np0005537197 python3.9[62478]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:55:29 np0005537197 python3.9[62630]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:55:30 np0005537197 python3.9[62782]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:55:31 np0005537197 python3.9[62934]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
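Taken together, the four ini_file tasks leave /etc/containers/containers.conf with exactly these keys (values copied from the log; shown as a whole-file sketch of the end state, whereas ini_file edits each key in place):

    cat > /etc/containers/containers.conf <<'EOF'
    [containers]
    pids_limit = 4096

    [engine]
    events_logger = "journald"
    runtime = "crun"

    [network]
    network_backend = "netavark"
    EOF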
Nov 26 17:55:32 np0005537197 python3.9[63086]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 17:55:34 np0005537197 python3.9[63239]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:55:35 np0005537197 python3.9[63393]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 17:55:36 np0005537197 python3.9[63545]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 17:55:37 np0005537197 python3.9[63697]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:55:39 np0005537197 python3.9[63850]: ansible-service_facts Invoked
Nov 26 17:55:40 np0005537197 network[63867]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 17:55:40 np0005537197 network[63868]: 'network-scripts' will be removed from distribution in near future.
Nov 26 17:55:40 np0005537197 network[63869]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 17:55:48 np0005537197 python3.9[64321]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 17:55:50 np0005537197 python3.9[64474]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 26 17:55:52 np0005537197 python3.9[64626]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:55:53 np0005537197 python3.9[64751]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764197751.6513696-232-161125865379574/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:55:54 np0005537197 python3.9[64905]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:55:54 np0005537197 python3.9[65030]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764197753.445615-247-190798281617766/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:55:56 np0005537197 python3.9[65184]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
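PEERNTP=no keeps the DHCP client from feeding DHCP-supplied NTP servers to chronyd, so the servers templated into /etc/chrony.conf above remain the only time sources. A plain-shell equivalent of the lineinfile semantics (replace the line if present, append otherwise):

    grep -q '^PEERNTP=' /etc/sysconfig/network \
        && sed -i 's/^PEERNTP=.*/PEERNTP=no/' /etc/sysconfig/network \
        || echo 'PEERNTP=no' >> /etc/sysconfig/network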
Nov 26 17:55:58 np0005537197 python3.9[65338]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 17:55:59 np0005537197 python3.9[65422]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 17:56:00 np0005537197 python3.9[65576]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 17:56:01 np0005537197 python3.9[65660]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 17:56:01 np0005537197 chronyd[831]: chronyd exiting
Nov 26 17:56:01 np0005537197 systemd[1]: Stopping NTP client/server...
Nov 26 17:56:01 np0005537197 systemd[1]: chronyd.service: Deactivated successfully.
Nov 26 17:56:01 np0005537197 systemd[1]: Stopped NTP client/server.
Nov 26 17:56:01 np0005537197 systemd[1]: Starting NTP client/server...
Nov 26 17:56:01 np0005537197 chronyd[65668]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 26 17:56:01 np0005537197 chronyd[65668]: Frequency -25.994 +/- 0.269 ppm read from /var/lib/chrony/drift
Nov 26 17:56:01 np0005537197 chronyd[65668]: Loaded seccomp filter (level 2)
Nov 26 17:56:01 np0005537197 systemd[1]: Started NTP client/server.
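The restart picks up the freshly written /etc/chrony.conf and /etc/sysconfig/chronyd; the drift file survives the restart, which is why chronyd can report a frequency estimate (-25.994 ppm) immediately. To verify the daemon is actually tracking after a reconfiguration like this:

    chronyc tracking      # current offset, frequency and reference source
    chronyc sources -v    # configured servers and their reachability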
Nov 26 17:56:02 np0005537197 systemd[1]: session-14.scope: Deactivated successfully.
Nov 26 17:56:02 np0005537197 systemd[1]: session-14.scope: Consumed 29.750s CPU time.
Nov 26 17:56:02 np0005537197 systemd-logind[819]: Session 14 logged out. Waiting for processes to exit.
Nov 26 17:56:02 np0005537197 systemd-logind[819]: Removed session 14.
Nov 26 17:56:07 np0005537197 systemd-logind[819]: New session 15 of user zuul.
Nov 26 17:56:07 np0005537197 systemd[1]: Started Session 15 of User zuul.
Nov 26 17:56:08 np0005537197 python3.9[65848]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:56:10 np0005537197 python3.9[66004]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:56:10 np0005537197 python3.9[66179]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:56:11 np0005537197 python3.9[66257]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.vxl1m91e recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:56:12 np0005537197 python3.9[66409]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:56:13 np0005537197 python3.9[66532]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764197771.8564956-61-249537891886516/.source _original_basename=.p05e21le follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:56:14 np0005537197 python3.9[66684]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:56:14 np0005537197 python3.9[66836]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:56:15 np0005537197 python3.9[66959]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764197774.3424182-85-157007003701769/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:56:16 np0005537197 python3.9[67111]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:56:16 np0005537197 python3.9[67234]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764197775.7781167-85-43499627085324/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:56:17 np0005537197 python3.9[67386]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:56:18 np0005537197 python3.9[67538]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:56:19 np0005537197 python3.9[67661]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197777.9989681-122-111473062713752/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:56:19 np0005537197 python3.9[67813]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:56:20 np0005537197 python3.9[67936]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197779.282092-137-60319246564375/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
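A unit file plus a 91-*.preset pair is the standard way to make enablement deterministic: systemctl preset (and the enabled=True task that follows) consults the preset file instead of a distribution default. The preset content is not logged; by convention it is a single line (an assumption):

    # /etc/systemd/system-preset/91-edpm-container-shutdown.preset
    enable edpm-container-shutdown.service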
Nov 26 17:56:21 np0005537197 python3.9[68088]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 17:56:21 np0005537197 systemd[1]: Reloading.
Nov 26 17:56:21 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 17:56:21 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 17:56:22 np0005537197 systemd[1]: Reloading.
Nov 26 17:56:22 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 17:56:22 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 17:56:22 np0005537197 systemd[1]: Starting EDPM Container Shutdown...
Nov 26 17:56:22 np0005537197 systemd[1]: Finished EDPM Container Shutdown.
Nov 26 17:56:23 np0005537197 python3.9[68316]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:56:23 np0005537197 python3.9[68439]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197782.5283482-160-70108715549469/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:56:24 np0005537197 python3.9[68591]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:56:25 np0005537197 python3.9[68714]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197784.1534576-175-174576160889877/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:56:26 np0005537197 python3.9[68866]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 17:56:26 np0005537197 systemd[1]: Reloading.
Nov 26 17:56:26 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 17:56:26 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 17:56:26 np0005537197 systemd[1]: Reloading.
Nov 26 17:56:26 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 17:56:26 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 17:56:26 np0005537197 systemd[1]: Starting Create netns directory...
Nov 26 17:56:26 np0005537197 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 26 17:56:26 np0005537197 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 26 17:56:26 np0005537197 systemd[1]: Finished Create netns directory.
Nov 26 17:56:27 np0005537197 python3.9[69093]: ansible-ansible.builtin.service_facts Invoked
Nov 26 17:56:28 np0005537197 network[69110]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 17:56:28 np0005537197 network[69111]: 'network-scripts' will be removed from distribution in near future.
Nov 26 17:56:28 np0005537197 network[69112]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 17:56:33 np0005537197 python3.9[69374]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 17:56:33 np0005537197 systemd[1]: Reloading.
Nov 26 17:56:33 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 17:56:33 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 17:56:33 np0005537197 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 26 17:56:33 np0005537197 iptables.init[69415]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 26 17:56:33 np0005537197 iptables.init[69415]: iptables: Flushing firewall rules: [  OK  ]
Nov 26 17:56:33 np0005537197 systemd[1]: iptables.service: Deactivated successfully.
Nov 26 17:56:33 np0005537197 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 26 17:56:35 np0005537197 python3.9[69611]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 17:56:36 np0005537197 python3.9[69765]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 17:56:36 np0005537197 systemd[1]: Reloading.
Nov 26 17:56:36 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 17:56:36 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 17:56:36 np0005537197 systemd[1]: Starting Netfilter Tables...
Nov 26 17:56:36 np0005537197 systemd[1]: Finished Netfilter Tables.
Nov 26 17:56:37 np0005537197 python3.9[69957]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
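The preceding lines record the firewall hand-off: iptables.service and ip6tables.service are stopped and disabled, nftables is enabled and started, and the kernel ruleset is flushed so the nft configuration starts clean. A minimal sketch of the same sequence, assuming the task names:

    - name: Stop and disable the legacy iptables services
      ansible.builtin.systemd:
        name: "{{ item }}"
        state: stopped
        enabled: false
      loop:
        - iptables.service
        - ip6tables.service

    - name: Enable and start nftables
      ansible.builtin.systemd:
        name: nftables
        state: started
        enabled: true

    - name: Flush whatever the iptables services left behind
      ansible.builtin.command: nft flush ruleset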
Nov 26 17:56:38 np0005537197 python3.9[70110]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:56:39 np0005537197 python3.9[70235]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764197798.0620701-244-134249876505805/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:56:40 np0005537197 python3.9[70388]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 17:56:40 np0005537197 systemd[1]: Reloading OpenSSH server daemon...
Nov 26 17:56:40 np0005537197 systemd[1]: Reloaded OpenSSH server daemon.
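The sshd_config deployment at 17:56:39 uses the module's validate hook: the rendered file (original basename sshd_config_block.j2, so presumably a template) is checked with sshd -T against the staged copy, and only a file that parses replaces /etc/ssh/sshd_config; sshd is then reloaded rather than restarted. A sketch of the pattern, assuming a template task on the controller side:

    - name: Deploy sshd_config, rejecting configs sshd cannot parse
      ansible.builtin.template:
        src: sshd_config_block.j2
        dest: /etc/ssh/sshd_config
        mode: "0600"
        validate: /usr/sbin/sshd -T -f %s

    - name: Reload sshd to pick up the new config
      ansible.builtin.systemd:
        name: sshd
        state: reloaded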
Nov 26 17:56:41 np0005537197 python3.9[70544]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:56:41 np0005537197 python3.9[70696]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:56:42 np0005537197 python3.9[70819]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197801.302414-275-210466808119510/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:56:43 np0005537197 python3.9[70971]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 26 17:56:43 np0005537197 systemd[1]: Starting Time & Date Service...
Nov 26 17:56:43 np0005537197 systemd[1]: Started Time & Date Service.
Nov 26 17:56:44 np0005537197 python3.9[71127]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:56:45 np0005537197 python3.9[71279]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:56:46 np0005537197 python3.9[71402]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764197804.9196863-310-159763155317433/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:56:47 np0005537197 python3.9[71554]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:56:47 np0005537197 python3.9[71677]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764197806.475948-325-111570938219917/.source.yaml _original_basename=.kskpnger follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:56:48 np0005537197 python3.9[71829]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:56:49 np0005537197 python3.9[71952]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197807.8811703-340-31455641665067/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:56:49 np0005537197 python3.9[72104]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:56:50 np0005537197 python3.9[72257]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:56:51 np0005537197 python3[72410]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 26 17:56:52 np0005537197 python3.9[72562]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:56:53 np0005537197 python3.9[72685]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197812.1309278-379-195270248129258/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:56:54 np0005537197 python3.9[72837]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:56:55 np0005537197 python3.9[72960]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197813.7435446-394-95024169724093/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:56:55 np0005537197 python3.9[73112]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:56:56 np0005537197 python3.9[73235]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197815.253974-409-153706991798668/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:56:57 np0005537197 python3.9[73387]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:56:58 np0005537197 python3.9[73510]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197816.7578156-424-25869368925471/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:56:58 np0005537197 python3.9[73662]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:56:59 np0005537197 python3.9[73785]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197818.2444632-439-32905795288365/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:57:00 np0005537197 python3.9[73937]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:57:01 np0005537197 python3.9[74089]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:57:02 np0005537197 python3.9[74248]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
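Nothing is persisted unchecked: at 17:57:01 the concatenated chain/flush/rule/jump files are dry-run through nft -c -f -, and the blockinfile edit above validates /etc/sysconfig/nftables.conf with nft -c -f %s before writing it. With journald's #012 newline escapes expanded, the managed block written to nftables.conf reads:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK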
Nov 26 17:57:03 np0005537197 python3.9[74401]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:57:04 np0005537197 python3.9[74553]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:57:05 np0005537197 python3.9[74705]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 26 17:57:05 np0005537197 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 17:57:05 np0005537197 python3.9[74859]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
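The two ansible.posix.mount tasks persist hugetlbfs mounts for 1G and 2M page sizes; state=mounted plus boot=True mounts the filesystem now and records it in fstab. A condensed sketch, with the resulting fstab lines as an assumption derived from the parameters:

    - name: Mount hugepage filesystems persistently
      ansible.posix.mount:
        path: "{{ item.path }}"
        src: none
        fstype: hugetlbfs
        opts: "{{ item.opts }}"
        state: mounted
        boot: true
      loop:
        - { path: /dev/hugepages1G, opts: pagesize=1G }
        - { path: /dev/hugepages2M, opts: pagesize=2M }

    # Expected /etc/fstab entries (assumption):
    # none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    # none /dev/hugepages2M hugetlbfs pagesize=2M 0 0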
Nov 26 17:57:06 np0005537197 systemd[1]: session-15.scope: Deactivated successfully.
Nov 26 17:57:06 np0005537197 systemd[1]: session-15.scope: Consumed 42.683s CPU time.
Nov 26 17:57:06 np0005537197 systemd-logind[819]: Session 15 logged out. Waiting for processes to exit.
Nov 26 17:57:06 np0005537197 systemd-logind[819]: Removed session 15.
Nov 26 17:57:11 np0005537197 systemd-logind[819]: New session 16 of user zuul.
Nov 26 17:57:11 np0005537197 systemd[1]: Started Session 16 of User zuul.
Nov 26 17:57:12 np0005537197 python3.9[75040]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 26 17:57:13 np0005537197 python3.9[75192]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 17:57:13 np0005537197 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 26 17:57:15 np0005537197 python3.9[75346]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:57:16 np0005537197 python3.9[75498]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC5OFnJbMeGfdGVCUG0vcgB+Cm8GZbXO5kPdgJevzxEsZZVvdX2RnEcKOzzLtc46eNX6U/aWaL44hb7y8KIp2EJw4/XAHxY+kcMblZ83cEynZ+yoMNl0vHkOUXiYdd0SgmIJxpWSG76IBd+Nk+xGAu8YY5GJeW7i7BlUYIKjWGRfrRVwkZIyzAH7CYaveDKOna/Y3KOL15iXv0peP+LdvNnwtEsuVPHBadgUQy44onrp0LJrBNiaqG0zJq0Yfte4D2MrlU5IIWAI13g0h/xG2m4HiON3x0gFL4R3BnxeoAeXyfN7aQml/mUCL8kqHoZa3xlmdoa3k9OwCRwlSzLeDs7M2u+FGWwTBTVM2mU7SVpue+7Lzf1T9jLI12YM0pWKkwJjojJxl8ewaXN+vceRFf1NjQ3/ByNNnJ1kyNLQPUL9sNWm3PbhDSRwoECTg4He3UbwzbNcBPa6z8vz/lvaxDpQ3pIgIbvorcI3U9UrWSYiNW+nX0i+eTtiRK4vqYKCyc=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOmr8gMiXEZSr/i30f/vns6km3RLSyAewSL+Zlhrfngv#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBKjG0ulw4pY+mnhR57lwIHrH8V4F4ADWKACLqK2Efl6RBfnWit57XqjuTvIShDfaq1spRSY5eoen5tcgkywZG0=#012 create=True mode=0644 path=/tmp/ansible.kejl2wcm state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:57:17 np0005537197 python3.9[75650]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.kejl2wcm' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:57:18 np0005537197 python3.9[75804]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.kejl2wcm state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
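Session 16 (17:57:11 to 17:57:18) is a known-hosts refresh: create a temp file, write the gathered host keys into it as a managed block, overwrite /etc/ssh/ssh_known_hosts from it via shell redirection, then delete the temp file. A condensed sketch; the register and variable names are assumptions:

    - name: Stage known_hosts content in a temp file
      ansible.builtin.tempfile:
        state: file
        prefix: ansible.
      register: known_hosts_tmp

    - name: Write host keys as a managed block
      ansible.builtin.blockinfile:
        path: "{{ known_hosts_tmp.path }}"
        block: "{{ host_key_entries }}"   # assembled from the ssh_host_key_* facts gathered above
        create: true
        mode: "0644"

    - name: Replace the system-wide known_hosts
      ansible.builtin.shell: cat '{{ known_hosts_tmp.path }}' > /etc/ssh/ssh_known_hosts

    - name: Remove the temp file
      ansible.builtin.file:
        path: "{{ known_hosts_tmp.path }}"
        state: absent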
Nov 26 17:57:18 np0005537197 systemd[1]: session-16.scope: Deactivated successfully.
Nov 26 17:57:18 np0005537197 systemd[1]: session-16.scope: Consumed 4.159s CPU time.
Nov 26 17:57:18 np0005537197 systemd-logind[819]: Session 16 logged out. Waiting for processes to exit.
Nov 26 17:57:18 np0005537197 systemd-logind[819]: Removed session 16.
Nov 26 17:57:24 np0005537197 systemd-logind[819]: New session 17 of user zuul.
Nov 26 17:57:24 np0005537197 systemd[1]: Started Session 17 of User zuul.
Nov 26 17:57:25 np0005537197 python3.9[75982]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:57:26 np0005537197 python3.9[76138]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 26 17:57:27 np0005537197 python3.9[76292]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 17:57:28 np0005537197 python3.9[76445]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:57:29 np0005537197 python3.9[76598]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 17:57:30 np0005537197 python3.9[76752]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:57:31 np0005537197 python3.9[76907]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
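Session 17 applies the firewall conditionally: edpm-chains.nft is always loaded (the chain definitions are idempotent), while the .changed marker touched at 17:57:00 gates the flush-and-reload of the rule files, after which the marker is removed. A sketch of the gating, with register and when names assumed:

    - name: Ensure the EDPM chains exist
      ansible.builtin.command: nft -f /etc/nftables/edpm-chains.nft

    - name: Check whether the ruleset changed this run
      ansible.builtin.stat:
        path: /etc/nftables/edpm-rules.nft.changed
      register: edpm_rules_changed

    - name: Flush and reload the rules only if they changed
      ansible.builtin.shell: >
        set -o pipefail;
        cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f -
      when: edpm_rules_changed.stat.exists

    - name: Drop the change marker
      ansible.builtin.file:
        path: /etc/nftables/edpm-rules.nft.changed
        state: absent
      when: edpm_rules_changed.stat.exists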
Nov 26 17:57:32 np0005537197 systemd[1]: session-17.scope: Deactivated successfully.
Nov 26 17:57:32 np0005537197 systemd[1]: session-17.scope: Consumed 5.519s CPU time.
Nov 26 17:57:32 np0005537197 systemd-logind[819]: Session 17 logged out. Waiting for processes to exit.
Nov 26 17:57:32 np0005537197 systemd-logind[819]: Removed session 17.
Nov 26 17:57:37 np0005537197 systemd-logind[819]: New session 18 of user zuul.
Nov 26 17:57:37 np0005537197 systemd[1]: Started Session 18 of User zuul.
Nov 26 17:57:38 np0005537197 python3.9[77085]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:57:40 np0005537197 python3.9[77241]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 17:57:41 np0005537197 python3.9[77325]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 26 17:57:43 np0005537197 python3.9[77476]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
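needs-restarting -r (from the yum-utils package installed just before it) exits 0 when no reboot is needed and 1 when one is required, so a playbook normally inspects the return code instead of letting rc=1 fail the task. A sketch, assuming that handling:

    - name: Check whether the node needs a reboot
      ansible.builtin.command: needs-restarting -r
      register: reboot_check
      failed_when: reboot_check.rc not in [0, 1]
      changed_when: false

    # reboot_check.rc == 1 means a reboot is required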
Nov 26 17:57:44 np0005537197 python3.9[77627]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 17:57:45 np0005537197 python3.9[77777]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 17:57:46 np0005537197 python3.9[77927]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 17:57:46 np0005537197 systemd[1]: session-18.scope: Deactivated successfully.
Nov 26 17:57:46 np0005537197 systemd[1]: session-18.scope: Consumed 6.608s CPU time.
Nov 26 17:57:46 np0005537197 systemd-logind[819]: Session 18 logged out. Waiting for processes to exit.
Nov 26 17:57:46 np0005537197 systemd-logind[819]: Removed session 18.
Nov 26 17:57:52 np0005537197 systemd-logind[819]: New session 19 of user zuul.
Nov 26 17:57:52 np0005537197 systemd[1]: Started Session 19 of User zuul.
Nov 26 17:57:54 np0005537197 python3.9[78105]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:57:56 np0005537197 python3.9[78261]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:57:57 np0005537197 python3.9[78413]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:57:57 np0005537197 python3.9[78565]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:57:58 np0005537197 python3.9[78688]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197877.22981-65-216701043447695/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=3a888c9ffeb510624f0bd9e7718ff24d6e6e5118 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:57:59 np0005537197 python3.9[78840]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:57:59 np0005537197 python3.9[78963]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197878.7624836-65-84552718579780/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=c017ae4b0ca5665d4cf3b8e099cd9bb1482e0c2a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:58:00 np0005537197 python3.9[79115]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:58:01 np0005537197 python3.9[79238]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197880.1351752-65-133146494349524/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=6293b47299a002fd02e5d542cff3b48e5d97d2ac backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
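This starts a pattern repeated through session 19: for each service (telemetry-power-monitoring, telemetry, ovn, libvirt, neutron-metadata) a certs directory is created with setype=container_file_t, then tls.crt, ca.crt, and tls.key are installed with mode 0600. A sketch of one iteration; the loop structure is an assumption, since the log shows the tasks fully expanded:

    - name: Create the per-service cert directory
      ansible.builtin.file:
        path: "/var/lib/openstack/certs/{{ service }}/default"
        state: directory
        owner: root
        group: root
        mode: "0755"
        setype: container_file_t

    - name: Install the TLS material
      ansible.builtin.copy:
        src: "compute-0.ctlplane.example.com-{{ item }}"
        dest: "/var/lib/openstack/certs/{{ service }}/default/{{ item }}"
        owner: root
        group: root
        mode: "0600"
      loop:
        - tls.crt
        - ca.crt
        - tls.key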
Nov 26 17:58:02 np0005537197 python3.9[79390]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:58:03 np0005537197 python3.9[79542]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:58:04 np0005537197 python3.9[79694]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:58:05 np0005537197 python3.9[79817]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197883.6729386-124-250817934821802/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=c03264d10d511a2937fd862e644918b3650a7e19 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:58:05 np0005537197 python3.9[79969]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:58:06 np0005537197 python3.9[80092]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197885.271297-124-29888203784477/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=c017ae4b0ca5665d4cf3b8e099cd9bb1482e0c2a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:58:07 np0005537197 python3.9[80244]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:58:08 np0005537197 python3.9[80367]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197886.76143-124-154291235803741/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=3b17c3d3b6e6f908b6a7de938bf12df1467f1481 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:58:08 np0005537197 python3.9[80519]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:58:09 np0005537197 python3.9[80671]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:58:10 np0005537197 python3.9[80823]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:58:11 np0005537197 python3.9[80946]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197889.9453924-183-175040472755356/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=01ca757b7b4de00c9f15cdc7a8d3db5b0863aa30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:58:11 np0005537197 chronyd[65668]: Selected source 216.232.132.102 (pool.ntp.org)
Nov 26 17:58:11 np0005537197 python3.9[81098]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:58:12 np0005537197 python3.9[81221]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197891.3013575-183-244981390494955/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=d5fcc0e752d2aff51bc3bda39b23d32a84bf1036 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:58:13 np0005537197 python3.9[81373]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:58:13 np0005537197 python3.9[81496]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197892.5916321-183-27480133081219/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=be600fe87c5d416c99e205222b0a305d46e6dfa1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:58:14 np0005537197 python3.9[81648]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:58:15 np0005537197 python3.9[81800]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:58:16 np0005537197 python3.9[81952]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:58:17 np0005537197 python3.9[82075]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197895.9304826-242-36542778444560/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=b6e2d4cfe32efed2e968d9cad73fff239f91591c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:58:17 np0005537197 python3.9[82227]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:58:18 np0005537197 python3.9[82350]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197897.342726-242-104069195630417/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=6c9422b0dd28b430bdcd21073f03069ff357e4cd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:58:19 np0005537197 python3.9[82502]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:58:19 np0005537197 python3.9[82625]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197898.733312-242-150896695796670/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=bffd8abaad40b64d781db4a0c8cd9ca193db6cd3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:58:20 np0005537197 python3.9[82777]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:58:21 np0005537197 python3.9[82929]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:58:22 np0005537197 python3.9[83081]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:58:23 np0005537197 python3.9[83204]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197901.9338782-301-204366557956243/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=706e71d922c344c0816abf82b1def84e311f1231 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:58:24 np0005537197 python3.9[83356]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:58:24 np0005537197 python3.9[83479]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197903.6064715-301-18019762243125/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=d5fcc0e752d2aff51bc3bda39b23d32a84bf1036 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:58:25 np0005537197 python3.9[83631]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:58:26 np0005537197 python3.9[83754]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197905.0245128-301-121613109547032/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=5ab972649bbfbcd518d7d839312ef3c7d69e2edf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:58:27 np0005537197 python3.9[83906]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:58:28 np0005537197 python3.9[84058]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:58:29 np0005537197 python3.9[84181]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197908.116899-369-34042219828245/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=228bd9abec6d1d59346d137ac91d935aec1bafa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
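The same fan-out follows for CA bundles: a single tls-ca-bundle.pem (checksum 228bd9abec6d1d59346d137ac91d935aec1bafa5 in every copy) is installed under /var/lib/openstack/cacerts/<service> for nova, repo-setup, libvirt, ovn, telemetry, neutron-metadata, bootstrap, and telemetry-power-monitoring. A sketch, with the loop again an assumption; each target directory is first created the same way with mode 0755 and setype container_file_t:

    - name: Distribute the shared CA bundle
      ansible.builtin.copy:
        src: tls-ca-bundle.pem
        dest: "/var/lib/openstack/cacerts/{{ item }}/tls-ca-bundle.pem"
        owner: root
        group: root
        mode: "0644"
      loop:
        - nova
        - repo-setup
        - libvirt
        - ovn
        - telemetry
        - neutron-metadata
        - bootstrap
        - telemetry-power-monitoring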
Nov 26 17:58:30 np0005537197 python3.9[84333]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:58:30 np0005537197 python3.9[84485]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:58:31 np0005537197 python3.9[84608]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197910.424394-393-85559050676080/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=228bd9abec6d1d59346d137ac91d935aec1bafa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:58:32 np0005537197 python3.9[84760]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:58:33 np0005537197 python3.9[84912]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:58:34 np0005537197 python3.9[85035]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197912.7172062-417-51350907005058/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=228bd9abec6d1d59346d137ac91d935aec1bafa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:58:34 np0005537197 python3.9[85187]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:58:35 np0005537197 python3.9[85339]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:58:36 np0005537197 python3.9[85462]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197915.1957998-441-41709138049927/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=228bd9abec6d1d59346d137ac91d935aec1bafa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:58:37 np0005537197 python3.9[85614]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:58:38 np0005537197 python3.9[85766]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:58:38 np0005537197 python3.9[85889]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197917.4825132-465-119211067130173/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=228bd9abec6d1d59346d137ac91d935aec1bafa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:58:39 np0005537197 python3.9[86041]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:58:40 np0005537197 python3.9[86193]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:58:41 np0005537197 python3.9[86316]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197919.8544512-489-245407626283225/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=228bd9abec6d1d59346d137ac91d935aec1bafa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:58:42 np0005537197 python3.9[86468]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:58:43 np0005537197 python3.9[86620]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:58:43 np0005537197 python3.9[86743]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197922.4210389-513-53132317392666/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=228bd9abec6d1d59346d137ac91d935aec1bafa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:58:44 np0005537197 python3.9[86895]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:58:45 np0005537197 python3.9[87047]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:58:46 np0005537197 python3.9[87170]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197924.8912864-537-214959423268330/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=228bd9abec6d1d59346d137ac91d935aec1bafa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:58:46 np0005537197 systemd[1]: session-19.scope: Deactivated successfully.
Nov 26 17:58:46 np0005537197 systemd[1]: session-19.scope: Consumed 41.550s CPU time.
Nov 26 17:58:46 np0005537197 systemd-logind[819]: Session 19 logged out. Waiting for processes to exit.
Nov 26 17:58:46 np0005537197 systemd-logind[819]: Removed session 19.
Nov 26 17:58:52 np0005537197 systemd-logind[819]: New session 20 of user zuul.
Nov 26 17:58:52 np0005537197 systemd[1]: Started Session 20 of User zuul.
Nov 26 17:58:54 np0005537197 python3.9[87349]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:58:55 np0005537197 python3.9[87505]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:58:56 np0005537197 python3.9[87657]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:58:56 np0005537197 python3.9[87807]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:58:57 np0005537197 python3.9[87959]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 26 17:58:59 np0005537197 dbus-broker-launch[792]: avc:  op=load_policy lsm=selinux seqno=11 res=1
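The ansible.posix.seboolean task at 17:58:57 is equivalent to the shell below; -P persists the boolean across reboots, and the policy reload it triggers matches the avc: op=load_policy line above.

```sh
# Shell equivalent of the seboolean task (persistent=True -> -P)
setsebool -P virt_sandbox_use_netlink on
# Read the value back
getsebool virt_sandbox_use_netlink
```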
Nov 26 17:59:00 np0005537197 python3.9[88115]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 17:59:01 np0005537197 python3.9[88199]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 17:59:03 np0005537197 python3.9[88352]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
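The dnf and systemd tasks above reduce to the following; `systemctl enable --now` mirrors enabled=True plus state=started.

```sh
# Install Open vSwitch and enable/start it in one step
dnf -y install openvswitch
systemctl enable --now openvswitch.service
```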
Nov 26 17:59:04 np0005537197 python3[88507]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
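The #012 sequences in the edpm_nftables_snippet call are syslog-escaped newlines. Decoded, the task writes the YAML below to /var/lib/edpm-config/firewall/ovn.yaml; the heredoc is only an illustration, since the osp.edpm.edpm_nftables_snippet module manages the file itself.

```sh
# Decoded content of the snippet logged above (illustrative heredoc)
cat > /var/lib/edpm-config/firewall/ovn.yaml <<'EOF'
- rule_name: 118 neutron vxlan networks
  rule:
    proto: udp
    dport: 4789
- rule_name: 119 neutron geneve networks
  rule:
    proto: udp
    dport: 6081
    state: ["UNTRACKED"]
- rule_name: 120 neutron geneve networks no conntrack
  rule:
    proto: udp
    dport: 6081
    table: raw
    chain: OUTPUT
    jump: NOTRACK
    action: append
    state: []
- rule_name: 121 neutron geneve networks no conntrack
  rule:
    proto: udp
    dport: 6081
    table: raw
    chain: PREROUTING
    jump: NOTRACK
    action: append
    state: []
EOF
```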
Nov 26 17:59:05 np0005537197 python3.9[88660]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:59:06 np0005537197 python3.9[88812]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:59:07 np0005537197 python3.9[88890]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:59:08 np0005537197 python3.9[89042]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:59:08 np0005537197 python3.9[89120]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.g45wtoeq recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:59:09 np0005537197 python3.9[89272]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:59:09 np0005537197 python3.9[89350]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:59:10 np0005537197 python3.9[89502]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:59:11 np0005537197 python3[89655]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 26 17:59:12 np0005537197 python3.9[89807]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:59:13 np0005537197 python3.9[89932]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197952.1924706-157-102511827517739/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:59:14 np0005537197 python3.9[90084]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:59:15 np0005537197 python3.9[90209]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197953.81164-172-197146821264820/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:59:16 np0005537197 python3.9[90361]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:59:16 np0005537197 python3.9[90486]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197955.5622382-187-120550445475348/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:59:17 np0005537197 python3.9[90638]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:59:18 np0005537197 python3.9[90763]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197957.2252128-202-17277297345179/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:59:19 np0005537197 python3.9[90915]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:59:20 np0005537197 python3.9[91040]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764197958.946903-217-272142394077900/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:59:21 np0005537197 python3.9[91192]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:59:22 np0005537197 python3.9[91344]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:59:23 np0005537197 python3.9[91499]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
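Decoding the #012 escapes again, the blockinfile task ensures /etc/sysconfig/nftables.conf carries the managed block below, so the ruleset survives reboots; validate=nft -c -f %s syntax-checks the file before it is written.

```
# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK
```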
Nov 26 17:59:24 np0005537197 python3.9[91651]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:59:25 np0005537197 python3.9[91804]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 17:59:26 np0005537197 python3.9[91958]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:59:26 np0005537197 python3.9[92113]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
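The commands from 17:59:21 to 17:59:26 follow nftables' check-then-apply pattern, with /etc/nftables/edpm-rules.nft.changed acting as the reload flag. A condensed sketch assembled from the logged commands (not the role's actual task file):

```sh
set -o pipefail
# 1. Dry-run the concatenated ruleset; -c checks syntax without installing it
cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
    /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
    /etc/nftables/edpm-jumps.nft | nft -c -f -
# 2. Make sure the chains exist
nft -f /etc/nftables/edpm-chains.nft
# 3. Only when rules changed: flush and repopulate, then clear the marker
if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
    cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -
    rm -f /etc/nftables/edpm-rules.nft.changed
fi
```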
Nov 26 17:59:28 np0005537197 python3.9[92263]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 17:59:29 np0005537197 python3.9[92416]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:c6:22:5a:f7" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:59:29 np0005537197 ovs-vsctl[92417]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:c6:22:5a:f7 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
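These external_ids form the chassis configuration that ovn-controller reads when it starts at 18:00:02: the geneve tunnel endpoint (ovn-encap-ip), the bridge mappings, and the SSL southbound remote. They can be read back key by key:

```sh
# Inspect what the ovs-vsctl set above wrote into the local ovsdb
ovs-vsctl get Open_vSwitch . external_ids
ovs-vsctl get Open_vSwitch . external_ids:ovn-remote   # "ssl:ovsdbserver-sb.openstack.svc:6642"
```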
Nov 26 17:59:30 np0005537197 python3.9[92569]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:59:31 np0005537197 python3.9[92724]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 17:59:31 np0005537197 ovs-vsctl[92725]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
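The Ansible line at 17:59:31 is partially masked (********), but ovs-vsctl logs the full call on the next line; combined with the `ovs-vsctl show | grep -q "Manager"` probe at 17:59:30, the intent is a guarded, create-once local ovsdb listener. A plain shell sketch:

```sh
# Create the plaintext-TCP manager on 127.0.0.1:6640 only if none exists yet
if ! ovs-vsctl show | grep -q "Manager"; then
    ovs-vsctl --timeout=5 --id=@manager \
        -- create Manager 'target="ptcp:6640:127.0.0.1"' \
        -- add Open_vSwitch . manager_options @manager
fi
```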
Nov 26 17:59:31 np0005537197 python3.9[92875]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 17:59:32 np0005537197 python3.9[93029]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:59:33 np0005537197 python3.9[93181]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:59:34 np0005537197 python3.9[93259]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:59:34 np0005537197 python3.9[93411]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:59:35 np0005537197 python3.9[93489]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:59:36 np0005537197 python3.9[93641]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
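mode=420 in the task above is not log corruption: Ansible parsed an unquoted YAML integer, and 420 decimal is 0644 octal, so the requested mode is 0644. Quoting the value ('0644') in the playbook avoids this decimal/octal ambiguity.

```sh
# 420 (decimal) == 0o644 (octal)
python3 -c 'print(oct(420))'
```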
Nov 26 17:59:36 np0005537197 python3.9[93793]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:59:37 np0005537197 python3.9[93871]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:59:38 np0005537197 python3.9[94023]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:59:38 np0005537197 python3.9[94102]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:59:39 np0005537197 python3.9[94254]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 17:59:39 np0005537197 systemd[1]: Reloading.
Nov 26 17:59:39 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 17:59:39 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 17:59:40 np0005537197 python3.9[94443]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:59:41 np0005537197 python3.9[94521]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:59:42 np0005537197 python3.9[94673]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:59:42 np0005537197 python3.9[94751]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:59:43 np0005537197 python3.9[94903]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 17:59:43 np0005537197 systemd[1]: Reloading.
Nov 26 17:59:43 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 17:59:43 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 17:59:43 np0005537197 systemd[1]: Starting Create netns directory...
Nov 26 17:59:43 np0005537197 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 26 17:59:43 np0005537197 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 26 17:59:43 np0005537197 systemd[1]: Finished Create netns directory.
Nov 26 17:59:44 np0005537197 python3.9[95097]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:59:45 np0005537197 python3.9[95249]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:59:46 np0005537197 python3.9[95372]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764197985.0239346-468-140581819903650/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:59:47 np0005537197 python3.9[95524]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 17:59:48 np0005537197 python3.9[95676]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 17:59:48 np0005537197 python3.9[95799]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764197987.68268-493-45401784464408/.source.json _original_basename=.ykjtpymc follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:59:49 np0005537197 python3.9[95951]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:59:52 np0005537197 python3.9[96378]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 26 17:59:53 np0005537197 python3.9[96530]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 17:59:54 np0005537197 python3.9[96682]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 26 17:59:54 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:59:55 np0005537197 python3[96846]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 17:59:55 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:59:56 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:59:56 np0005537197 podman[96882]: 2025-11-26 22:59:56.13560778 +0000 UTC m=+0.059931233 container create 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 17:59:56 np0005537197 podman[96882]: 2025-11-26 22:59:56.105523876 +0000 UTC m=+0.029847379 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 26 17:59:56 np0005537197 python3[96846]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 26 17:59:56 np0005537197 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 26 17:59:57 np0005537197 python3.9[97072]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 17:59:58 np0005537197 python3.9[97226]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 17:59:58 np0005537197 python3.9[97302]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 17:59:59 np0005537197 python3.9[97453]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764197998.9093666-581-162492475470101/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:00:00 np0005537197 python3.9[97529]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 18:00:00 np0005537197 systemd[1]: Reloading.
Nov 26 18:00:00 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:00:00 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:00:01 np0005537197 python3.9[97640]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:00:01 np0005537197 systemd[1]: Reloading.
Nov 26 18:00:01 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:00:01 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:00:01 np0005537197 systemd[1]: Starting ovn_controller container...
Nov 26 18:00:01 np0005537197 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 26 18:00:01 np0005537197 systemd[1]: Started libcrun container.
Nov 26 18:00:01 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39cad09456736cb70fd882881bc66234f5c65f851262efc2cc91696cc70e296e/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 26 18:00:01 np0005537197 systemd[1]: Started /usr/bin/podman healthcheck run 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0.
Nov 26 18:00:01 np0005537197 podman[97681]: 2025-11-26 23:00:01.658212443 +0000 UTC m=+0.123292117 container init 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 26 18:00:01 np0005537197 ovn_controller[97697]: + sudo -E kolla_set_configs
Nov 26 18:00:01 np0005537197 podman[97681]: 2025-11-26 23:00:01.686261644 +0000 UTC m=+0.151341288 container start 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 26 18:00:01 np0005537197 edpm-start-podman-container[97681]: ovn_controller
Nov 26 18:00:01 np0005537197 systemd[1]: Created slice User Slice of UID 0.
Nov 26 18:00:01 np0005537197 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 26 18:00:01 np0005537197 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 26 18:00:01 np0005537197 systemd[1]: Starting User Manager for UID 0...
Nov 26 18:00:01 np0005537197 edpm-start-podman-container[97680]: Creating additional drop-in dependency for "ovn_controller" (3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0)
Nov 26 18:00:01 np0005537197 podman[97704]: 2025-11-26 23:00:01.758301276 +0000 UTC m=+0.061270169 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 18:00:01 np0005537197 systemd[1]: 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0-555c3960ffdce4ea.service: Main process exited, code=exited, status=1/FAILURE
Nov 26 18:00:01 np0005537197 systemd[1]: 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0-555c3960ffdce4ea.service: Failed with result 'exit-code'.
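The status=1/FAILURE from the transient healthcheck unit is consistent with health_status=starting and health_failing_streak=1 reported by podman just above: the timer fired before ovn-controller was up. Once the container settles, the same check can be re-run by hand:

```sh
# Exit 0 means the /openstack/healthcheck script mounted into the container passed
podman healthcheck run ovn_controller && echo healthy
```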
Nov 26 18:00:01 np0005537197 systemd[1]: Reloading.
Nov 26 18:00:01 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:00:01 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:00:01 np0005537197 systemd[97734]: Queued start job for default target Main User Target.
Nov 26 18:00:01 np0005537197 systemd[97734]: Created slice User Application Slice.
Nov 26 18:00:01 np0005537197 systemd[97734]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 26 18:00:01 np0005537197 systemd[97734]: Started Daily Cleanup of User's Temporary Directories.
Nov 26 18:00:01 np0005537197 systemd[97734]: Reached target Paths.
Nov 26 18:00:01 np0005537197 systemd[97734]: Reached target Timers.
Nov 26 18:00:01 np0005537197 systemd[97734]: Starting D-Bus User Message Bus Socket...
Nov 26 18:00:01 np0005537197 systemd[97734]: Starting Create User's Volatile Files and Directories...
Nov 26 18:00:01 np0005537197 systemd[97734]: Finished Create User's Volatile Files and Directories.
Nov 26 18:00:01 np0005537197 systemd[97734]: Listening on D-Bus User Message Bus Socket.
Nov 26 18:00:01 np0005537197 systemd[97734]: Reached target Sockets.
Nov 26 18:00:01 np0005537197 systemd[97734]: Reached target Basic System.
Nov 26 18:00:01 np0005537197 systemd[97734]: Reached target Main User Target.
Nov 26 18:00:01 np0005537197 systemd[97734]: Startup finished in 123ms.
Nov 26 18:00:01 np0005537197 systemd[1]: Started User Manager for UID 0.
Nov 26 18:00:01 np0005537197 systemd[1]: Started ovn_controller container.
Nov 26 18:00:01 np0005537197 systemd[1]: Started Session c1 of User root.
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: INFO:__main__:Validating config file
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: INFO:__main__:Writing out command to execute
Nov 26 18:00:02 np0005537197 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: ++ cat /run_command
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: + ARGS=
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: + sudo kolla_copy_cacerts
Nov 26 18:00:02 np0005537197 systemd[1]: Started Session c2 of User root.
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: + [[ ! -n '' ]]
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: + . kolla_extend_start
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: + umask 0022
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
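The exec'd command uses the TLS material mounted into the container at 17:59:56: -p and -c are the chassis key and certificate presented to the southbound database, -C the CA used to verify ovsdbserver-sb.openstack.svc:6642. Assuming openssl is available in the image, that material can be sanity-checked in place:

```sh
# Check the chassis certificate and verify it against the mounted CA
podman exec ovn_controller openssl x509 -in /etc/pki/tls/certs/ovndb.crt -noout -subject -enddate
podman exec ovn_controller openssl verify -CAfile /etc/pki/tls/certs/ovndbca.crt /etc/pki/tls/certs/ovndb.crt
```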
Nov 26 18:00:02 np0005537197 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 26 18:00:02 np0005537197 NetworkManager[56227]: <info>  [1764198002.1472] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 26 18:00:02 np0005537197 NetworkManager[56227]: <info>  [1764198002.1477] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 18:00:02 np0005537197 NetworkManager[56227]: <info>  [1764198002.1487] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Nov 26 18:00:02 np0005537197 NetworkManager[56227]: <info>  [1764198002.1491] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Nov 26 18:00:02 np0005537197 NetworkManager[56227]: <info>  [1764198002.1494] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 26 18:00:02 np0005537197 kernel: br-int: entered promiscuous mode
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00021|main|INFO|OVS OpenFlow connection reconnected, force recompute.
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00022|main|INFO|OVS feature set changed, force recompute.
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 26 18:00:02 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:02Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 26 18:00:02 np0005537197 NetworkManager[56227]: <info>  [1764198002.1657] manager: (ovn-e01c02-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 26 18:00:02 np0005537197 kernel: genev_sys_6081: entered promiscuous mode
Nov 26 18:00:02 np0005537197 NetworkManager[56227]: <info>  [1764198002.1810] device (genev_sys_6081): carrier: link connected
Nov 26 18:00:02 np0005537197 NetworkManager[56227]: <info>  [1764198002.1813] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
Nov 26 18:00:02 np0005537197 systemd-udevd[97853]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 18:00:02 np0005537197 systemd-udevd[97858]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 18:00:02 np0005537197 python3.9[97964]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:00:02 np0005537197 ovs-vsctl[97965]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 26 18:00:03 np0005537197 python3.9[98117]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:00:03 np0005537197 ovs-vsctl[98119]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 26 18:00:04 np0005537197 python3.9[98272]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:00:04 np0005537197 ovs-vsctl[98273]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
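The db_ctl_base ERR at 18:00:03 is benign: `ovs-vsctl get` fails when the requested key is absent, while the `remove` that follows is a no-op in that case and exits cleanly, as the log shows. Passing --if-exists silences the read:

```sh
# Returns empty instead of erroring when the key is unset
ovs-vsctl --if-exists get Open_vSwitch . external_ids:ovn-cms-options
# Removing a missing key from a map column is already idempotent
ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
```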
Nov 26 18:00:05 np0005537197 systemd[1]: session-20.scope: Deactivated successfully.
Nov 26 18:00:05 np0005537197 systemd[1]: session-20.scope: Consumed 53.438s CPU time.
Nov 26 18:00:05 np0005537197 systemd-logind[819]: Session 20 logged out. Waiting for processes to exit.
Nov 26 18:00:05 np0005537197 systemd-logind[819]: Removed session 20.
Nov 26 18:00:11 np0005537197 systemd-logind[819]: New session 22 of user zuul.
Nov 26 18:00:11 np0005537197 systemd[1]: Started Session 22 of User zuul.
Nov 26 18:00:12 np0005537197 systemd[1]: Stopping User Manager for UID 0...
Nov 26 18:00:12 np0005537197 systemd[97734]: Activating special unit Exit the Session...
Nov 26 18:00:12 np0005537197 systemd[97734]: Stopped target Main User Target.
Nov 26 18:00:12 np0005537197 systemd[97734]: Stopped target Basic System.
Nov 26 18:00:12 np0005537197 systemd[97734]: Stopped target Paths.
Nov 26 18:00:12 np0005537197 systemd[97734]: Stopped target Sockets.
Nov 26 18:00:12 np0005537197 systemd[97734]: Stopped target Timers.
Nov 26 18:00:12 np0005537197 systemd[97734]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 26 18:00:12 np0005537197 systemd[97734]: Closed D-Bus User Message Bus Socket.
Nov 26 18:00:12 np0005537197 systemd[97734]: Stopped Create User's Volatile Files and Directories.
Nov 26 18:00:12 np0005537197 systemd[97734]: Removed slice User Application Slice.
Nov 26 18:00:12 np0005537197 systemd[97734]: Reached target Shutdown.
Nov 26 18:00:12 np0005537197 systemd[97734]: Finished Exit the Session.
Nov 26 18:00:12 np0005537197 systemd[97734]: Reached target Exit the Session.
Nov 26 18:00:12 np0005537197 systemd[1]: user@0.service: Deactivated successfully.
Nov 26 18:00:12 np0005537197 systemd[1]: Stopped User Manager for UID 0.
Nov 26 18:00:12 np0005537197 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 26 18:00:12 np0005537197 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 26 18:00:12 np0005537197 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 26 18:00:12 np0005537197 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 26 18:00:12 np0005537197 systemd[1]: Removed slice User Slice of UID 0.
Nov 26 18:00:13 np0005537197 python3.9[98454]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 18:00:14 np0005537197 python3.9[98610]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:00:15 np0005537197 python3.9[98762]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:00:16 np0005537197 python3.9[98914]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:00:16 np0005537197 python3.9[99066]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:00:17 np0005537197 python3.9[99218]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:00:18 np0005537197 python3.9[99368]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 18:00:19 np0005537197 python3.9[99520]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
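The ansible.posix.seboolean task with persistent=True corresponds roughly to setsebool with the persist flag; virt_sandbox_use_netlink allows netlink sockets from sandboxed container domains, commonly enabled on OVN/neutron nodes so the containerized agents can drive interfaces and namespaces. Equivalent manual commands:

    setsebool -P virt_sandbox_use_netlink on   # -P persists the change across reboots
    getsebool virt_sandbox_use_netlink         # verify: should print 'on'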
Nov 26 18:00:21 np0005537197 python3.9[99671]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:00:22 np0005537197 python3.9[99792]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764198020.5761387-86-240915719768267/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:00:22 np0005537197 python3.9[99942]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:00:23 np0005537197 python3.9[100063]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764198022.4063604-101-273639586010388/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
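Both helper scripts land under /var/lib/neutron, which is later bind-mounted into the agent container (the wrapper becomes /usr/local/bin/haproxy inside it, per the volume list further below). A quick way to confirm the copies took effect, using the checksum from the log:

    ls -lZ /var/lib/neutron/ovn_metadata_haproxy_wrapper /var/lib/neutron/kill_scripts/haproxy-kill   # expect container_file_t, mode 0755
    sha1sum /var/lib/neutron/ovn_metadata_haproxy_wrapper   # should match 95c62e64c8f82dd9393a560d1b052dc98d38f810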
Nov 26 18:00:24 np0005537197 python3.9[100215]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 18:00:25 np0005537197 python3.9[100299]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 18:00:28 np0005537197 python3.9[100452]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
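Stripped of Ansible bookkeeping, those two tasks reduce to a package install plus enable-and-start:

    dnf install -y openvswitch
    systemctl enable --now openvswitch.service
    systemctl is-active openvswitch.service   # the OVN containers below list this unit in depends_on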
Nov 26 18:00:29 np0005537197 python3.9[100605]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:00:29 np0005537197 python3.9[100726]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764198028.4993663-138-251228944522115/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:00:30 np0005537197 python3.9[100876]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:00:31 np0005537197 python3.9[100997]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764198029.9095318-138-194625803840071/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
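The file contents are redacted (content=NOT_LOGGING_PARAMETER), so the following is only a plausible sketch of what 01-neutron-ovn-metadata-agent.conf carries: the option names are real neutron-ovn-metadata-agent settings and the certificate paths match the TLS mounts given to the container later in the log, but every value is a placeholder:

    cat <<'EOF' > /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf
    [DEFAULT]
    nova_metadata_host = nova-metadata.internal.example   # placeholder
    metadata_proxy_shared_secret = REDACTED               # placeholder
    [ovn]
    ovn_sb_connection = ssl:ovsdbserver-sb.example:6642   # placeholder
    ovn_sb_ca_cert = /etc/pki/tls/certs/ovndbca.crt
    ovn_sb_certificate = /etc/pki/tls/certs/ovndb.crt
    ovn_sb_private_key = /etc/pki/tls/private/ovndb.key
    EOF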
Nov 26 18:00:32 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:32Z|00025|memory|INFO|16128 kB peak resident set size after 30.0 seconds
Nov 26 18:00:32 np0005537197 ovn_controller[97697]: 2025-11-26T23:00:32Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:471 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Nov 26 18:00:32 np0005537197 podman[101121]: 2025-11-26 23:00:32.217838443 +0000 UTC m=+0.135410174 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller)
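The periodic health_status entries come from the healthcheck podman attached at container creation: the configured test is the /openstack/healthcheck script, mounted read-only from /var/lib/openstack/healthchecks/ovn_controller. The same probe can be run on demand:

    podman healthcheck run ovn_controller; echo "exit=$?"   # 0 means healthy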
Nov 26 18:00:32 np0005537197 python3.9[101158]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:00:32 np0005537197 python3.9[101293]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764198031.7350116-182-108234550044610/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:00:33 np0005537197 python3.9[101443]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:00:34 np0005537197 python3.9[101564]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764198033.0981028-182-202510419040512/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:00:35 np0005537197 python3.9[101714]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:00:36 np0005537197 python3.9[101868]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:00:36 np0005537197 python3.9[102020]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:00:37 np0005537197 python3.9[102098]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:00:38 np0005537197 python3.9[102250]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:00:39 np0005537197 python3.9[102328]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:00:40 np0005537197 python3.9[102480]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:00:42 np0005537197 python3.9[102632]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:00:43 np0005537197 python3.9[102710]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:00:43 np0005537197 python3.9[102862]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:00:44 np0005537197 python3.9[102940]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:00:45 np0005537197 python3.9[103092]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:00:45 np0005537197 systemd[1]: Reloading.
Nov 26 18:00:45 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:00:45 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
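edpm-container-shutdown is installed as a unit file plus a preset, then enabled and started; the preset file presumably holds a single `enable edpm-container-shutdown.service` directive (its content is not shown in the log). With both files in place, the same end state is reachable via the preset policy:

    systemctl daemon-reload
    systemctl preset edpm-container-shutdown.service   # applies 91-edpm-container-shutdown.preset
    systemctl start edpm-container-shutdown.service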
Nov 26 18:00:46 np0005537197 python3.9[103281]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:00:47 np0005537197 python3.9[103359]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:00:47 np0005537197 python3.9[103511]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:00:48 np0005537197 python3.9[103589]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:00:49 np0005537197 python3.9[103741]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:00:49 np0005537197 systemd[1]: Reloading.
Nov 26 18:00:49 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:00:49 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:00:49 np0005537197 systemd[1]: Starting Create netns directory...
Nov 26 18:00:49 np0005537197 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 26 18:00:49 np0005537197 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 26 18:00:49 np0005537197 systemd[1]: Finished Create netns directory.
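netns-placeholder exists to make /run/netns a shared mount point before containers bind it with the :shared flag; the run-netns-placeholder.mount deactivation above is consistent with the unit creating and immediately deleting a throwaway namespace. A sketch of that trick (the unit's exact commands are not in the log):

    ip netns add placeholder    # the first 'ip netns add' bind-mounts /run/netns onto itself, shared
    ip netns delete placeholder
    findmnt /run/netns          # the mount point persists after the namespace is gone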
Nov 26 18:00:50 np0005537197 python3.9[103935]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:00:51 np0005537197 python3.9[104087]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:00:52 np0005537197 python3.9[104210]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764198051.1028967-333-224361371896263/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:00:53 np0005537197 python3.9[104362]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:00:54 np0005537197 python3.9[104514]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:00:54 np0005537197 python3.9[104637]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764198053.498646-358-153431428228934/.source.json _original_basename=.wp7vzyrx follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
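ovn_metadata_agent.json is the kolla config.json the container consumes as /var/lib/kolla/config_files/config.json; its content is redacted here, but the kolla_set_configs output further below (copy 01-rootwrap.conf, set permissions under /var/lib/neutron, write the run command) pins down its general shape. A hypothetical reconstruction:

    cat <<'EOF'   # sketch only; the real file content was NOT_LOGGING_PARAMETER
    {
      "command": "neutron-ovn-metadata-agent",
      "config_files": [
        {"source": "/etc/neutron.conf.d/01-rootwrap.conf",
         "dest": "/etc/neutron/rootwrap.conf",
         "owner": "neutron", "perm": "0600"}
      ],
      "permissions": [
        {"path": "/var/lib/neutron", "owner": "neutron:neutron", "recurse": true}
      ]
    }
    EOF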
Nov 26 18:00:55 np0005537197 python3.9[104789]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:00:58 np0005537197 python3.9[105216]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 26 18:00:59 np0005537197 python3.9[105368]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 18:01:00 np0005537197 python3.9[105520]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 26 18:01:02 np0005537197 python3[105711]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 18:01:02 np0005537197 podman[105749]: 2025-11-26 23:01:02.517847304 +0000 UTC m=+0.050404935 container create b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Nov 26 18:01:02 np0005537197 podman[105749]: 2025-11-26 23:01:02.49221412 +0000 UTC m=+0.024771761 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 26 18:01:02 np0005537197 python3[105711]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
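The PODMAN-CONTAINER-DEBUG line is the literal command edpm_container_manage issued; the labels it sets (config_id, container_name, managed_by) are what later lets the role find and reconcile its containers. Two quick checks against the result:

    podman ps -a --filter label=config_id=ovn_metadata_agent
    podman inspect ovn_metadata_agent --format '{{ index .Config.Labels "managed_by" }}'   # edpm_ansible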
Nov 26 18:01:02 np0005537197 podman[105788]: 2025-11-26 23:01:02.801189776 +0000 UTC m=+0.094017332 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 26 18:01:03 np0005537197 python3.9[105966]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:01:04 np0005537197 python3.9[106120]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:01:04 np0005537197 python3.9[106196]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:01:05 np0005537197 python3.9[106347]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764198064.874622-446-36536015753444/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:01:06 np0005537197 python3.9[106423]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 18:01:06 np0005537197 systemd[1]: Reloading.
Nov 26 18:01:06 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:01:06 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:01:07 np0005537197 python3.9[106534]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:01:07 np0005537197 systemd[1]: Reloading.
Nov 26 18:01:07 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:01:07 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:01:07 np0005537197 systemd[1]: Starting ovn_metadata_agent container...
Nov 26 18:01:07 np0005537197 systemd[1]: Started libcrun container.
Nov 26 18:01:07 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b832bcc762cbde09200eb4657a6bb7f3d026bf0368f2ef399e173c24f40ffcd/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 26 18:01:07 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b832bcc762cbde09200eb4657a6bb7f3d026bf0368f2ef399e173c24f40ffcd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 26 18:01:07 np0005537197 systemd[1]: Started /usr/bin/podman healthcheck run b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b.
Nov 26 18:01:07 np0005537197 podman[106574]: 2025-11-26 23:01:07.689831967 +0000 UTC m=+0.194020666 container init b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 26 18:01:07 np0005537197 ovn_metadata_agent[106590]: + sudo -E kolla_set_configs
Nov 26 18:01:07 np0005537197 podman[106574]: 2025-11-26 23:01:07.737807521 +0000 UTC m=+0.241996170 container start b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 18:01:07 np0005537197 edpm-start-podman-container[106574]: ovn_metadata_agent
Nov 26 18:01:07 np0005537197 ovn_metadata_agent[106590]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 18:01:07 np0005537197 ovn_metadata_agent[106590]: INFO:__main__:Validating config file
Nov 26 18:01:07 np0005537197 ovn_metadata_agent[106590]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 18:01:07 np0005537197 ovn_metadata_agent[106590]: INFO:__main__:Copying service configuration files
Nov 26 18:01:07 np0005537197 ovn_metadata_agent[106590]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 26 18:01:07 np0005537197 ovn_metadata_agent[106590]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 26 18:01:07 np0005537197 ovn_metadata_agent[106590]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 26 18:01:07 np0005537197 ovn_metadata_agent[106590]: INFO:__main__:Writing out command to execute
Nov 26 18:01:07 np0005537197 ovn_metadata_agent[106590]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 26 18:01:07 np0005537197 ovn_metadata_agent[106590]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 26 18:01:07 np0005537197 ovn_metadata_agent[106590]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 26 18:01:07 np0005537197 ovn_metadata_agent[106590]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 26 18:01:07 np0005537197 ovn_metadata_agent[106590]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 26 18:01:07 np0005537197 ovn_metadata_agent[106590]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 26 18:01:07 np0005537197 ovn_metadata_agent[106590]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 26 18:01:07 np0005537197 ovn_metadata_agent[106590]: ++ cat /run_command
Nov 26 18:01:07 np0005537197 ovn_metadata_agent[106590]: + CMD=neutron-ovn-metadata-agent
Nov 26 18:01:07 np0005537197 ovn_metadata_agent[106590]: + ARGS=
Nov 26 18:01:07 np0005537197 ovn_metadata_agent[106590]: + sudo kolla_copy_cacerts
Nov 26 18:01:07 np0005537197 podman[106597]: 2025-11-26 23:01:07.840380031 +0000 UTC m=+0.085415022 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 26 18:01:07 np0005537197 edpm-start-podman-container[106573]: Creating additional drop-in dependency for "ovn_metadata_agent" (b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b)
Nov 26 18:01:07 np0005537197 ovn_metadata_agent[106590]: + [[ ! -n '' ]]
Nov 26 18:01:07 np0005537197 ovn_metadata_agent[106590]: + . kolla_extend_start
Nov 26 18:01:07 np0005537197 ovn_metadata_agent[106590]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 26 18:01:07 np0005537197 ovn_metadata_agent[106590]: Running command: 'neutron-ovn-metadata-agent'
Nov 26 18:01:07 np0005537197 ovn_metadata_agent[106590]: + umask 0022
Nov 26 18:01:07 np0005537197 ovn_metadata_agent[106590]: + exec neutron-ovn-metadata-agent
Nov 26 18:01:07 np0005537197 systemd[1]: Reloading.
Nov 26 18:01:07 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:01:08 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:01:08 np0005537197 systemd[1]: Started ovn_metadata_agent container.
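From this point the container's lifecycle belongs to the edpm_ovn_metadata_agent.service wrapper unit (note the drop-in dependency created above), so day-two operations should go through systemd rather than bare podman:

    systemctl status edpm_ovn_metadata_agent.service
    podman logs --since 10m ovn_metadata_agent   # works because the container logs to journald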
Nov 26 18:01:08 np0005537197 systemd[1]: session-22.scope: Deactivated successfully.
Nov 26 18:01:08 np0005537197 systemd[1]: session-22.scope: Consumed 40.743s CPU time.
Nov 26 18:01:08 np0005537197 systemd-logind[819]: Session 22 logged out. Waiting for processes to exit.
Nov 26 18:01:08 np0005537197 systemd-logind[819]: Removed session 22.
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.564 106595 INFO neutron.common.config [-] Logging enabled!
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.564 106595 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.564 106595 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
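Everything that follows is oslo.config's log_opt_values() dump: with debug = True (visible below) the agent prints every resolved option once at startup, sourced from /etc/neutron/neutron.conf plus the /etc/neutron.conf.d config_dir. To silence it, drop the debug flag via a later-sorted override file, e.g. (filename hypothetical):

    printf '[DEFAULT]\ndebug = False\n' > /etc/neutron.conf.d/99-disable-debug.conf   # config_dir files are read in sorted order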
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.565 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.565 106595 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.565 106595 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.565 106595 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.565 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.565 106595 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.566 106595 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.566 106595 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.566 106595 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.566 106595 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.566 106595 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.566 106595 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.566 106595 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.566 106595 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.566 106595 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.567 106595 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.567 106595 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.567 106595 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.567 106595 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.567 106595 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.567 106595 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.567 106595 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.567 106595 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.567 106595 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.568 106595 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.568 106595 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.568 106595 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.568 106595 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.568 106595 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.568 106595 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.568 106595 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.568 106595 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.569 106595 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.569 106595 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.569 106595 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.569 106595 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.569 106595 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.569 106595 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.569 106595 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.569 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.569 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.569 106595 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.570 106595 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.570 106595 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.570 106595 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.570 106595 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.570 106595 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.570 106595 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.570 106595 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.570 106595 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.570 106595 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.570 106595 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.571 106595 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.571 106595 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.571 106595 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.571 106595 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.571 106595 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.571 106595 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.571 106595 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.571 106595 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.571 106595 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.571 106595 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.572 106595 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.572 106595 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.572 106595 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.572 106595 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.572 106595 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.572 106595 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.572 106595 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.572 106595 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.572 106595 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.573 106595 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.573 106595 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.573 106595 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.573 106595 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.573 106595 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.573 106595 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.573 106595 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.573 106595 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.573 106595 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.573 106595 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.574 106595 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.574 106595 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.574 106595 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.574 106595 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.574 106595 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.574 106595 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.574 106595 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.574 106595 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.574 106595 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.574 106595 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.575 106595 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.575 106595 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.575 106595 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.575 106595 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.575 106595 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.575 106595 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.575 106595 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.575 106595 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.575 106595 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.575 106595 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.575 106595 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.576 106595 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.576 106595 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.576 106595 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
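The block above is oslo.config's ConfigOpts.log_opt_values() walking every option registered under the default group at DEBUG level. Secret options (metadata_proxy_shared_secret, transport_url) are masked as ****, and the trailing "log_opt_values ... cfg.py:2602" on each line is simply the logging_debug_format_suffix listed above doing its job. A minimal sketch of the same mechanism, assuming only oslo.config and stdlib logging, with two illustrative options rather than the agent's real set:

    # Minimal sketch, assuming oslo.config is installed; the two options
    # registered here are illustrative, not the agent's full option set.
    import logging

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.IntOpt('http_retries', default=3),
        cfg.StrOpt('metadata_proxy_shared_secret', secret=True),  # logged as ****
    ])

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger('neutron.agent.ovn.metadata_agent')

    CONF([])                                  # parse an empty command line
    CONF.log_opt_values(LOG, logging.DEBUG)   # one "name = value" line per option
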
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.576 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.576 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.576 106595 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.576 106595 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.576 106595 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.576 106595 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.577 106595 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.577 106595 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.577 106595 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.577 106595 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.577 106595 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.577 106595 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.577 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.577 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.577 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.578 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.578 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.578 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.578 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.578 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.578 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.578 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
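From oslo_concurrency.disable_process_locking onward the option names are dotted: the prefix is an oslo.config option group, and each group corresponds to an INI section in the agent's configuration file (grouped options are logged from cfg.py:2609, the ungrouped ones above from cfg.py:2602). A short sketch of that mapping, using a throwaway file as a stand-in for the real configuration:

    # Sketch: a dotted name such as "oslo_policy.policy_file" is a group
    # option; the group name becomes the INI section header.
    import tempfile

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([cfg.StrOpt('policy_file', default='policy.yaml')],
                       group='oslo_policy')

    # Throwaway stand-in for the agent's real config file.
    with tempfile.NamedTemporaryFile('w', suffix='.ini', delete=False) as f:
        f.write('[oslo_policy]\npolicy_file = custom-policy.yaml\n')

    CONF(['--config-file', f.name])
    print(CONF.oslo_policy.policy_file)   # -> custom-policy.yaml
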
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.578 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.578 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.579 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.579 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.579 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.579 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.579 106595 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.579 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.579 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.579 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.579 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.580 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.580 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.580 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.580 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.580 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.580 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.580 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.580 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.580 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.580 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.581 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.581 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.581 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.581 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.581 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.581 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.581 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.581 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.581 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.582 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.582 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.582 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.582 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.582 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.582 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.582 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.582 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.582 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.583 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.583 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.583 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.583 106595 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
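The privsep_*.capabilities values above are Linux capability numbers from include/uapi/linux/capability.h: [21, 12, 1, 2, 19] is CAP_SYS_ADMIN, CAP_NET_ADMIN, CAP_DAC_OVERRIDE, CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE. A small decoder for the lists in this dump; the mapping table is copied from the kernel header, not from the log:

    # Decode the numeric capability lists logged by the privsep_* groups.
    # Numbers per include/uapi/linux/capability.h.
    CAP_NAMES = {
        1: 'CAP_DAC_OVERRIDE',
        2: 'CAP_DAC_READ_SEARCH',
        12: 'CAP_NET_ADMIN',
        19: 'CAP_SYS_PTRACE',
        21: 'CAP_SYS_ADMIN',
    }

    def decode(caps):
        return [CAP_NAMES.get(c, 'CAP_#%d' % c) for c in caps]

    print(decode([21, 12, 1, 2, 19]))  # privsep.capabilities
    print(decode([21, 12]))            # privsep_dhcp_release, privsep_ovs_vsctl
    print(decode([12]))                # privsep_conntrack
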
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.583 106595 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.583 106595 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.583 106595 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.583 106595 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.583 106595 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.584 106595 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.584 106595 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.584 106595 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.584 106595 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.584 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.584 106595 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.584 106595 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.584 106595 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.584 106595 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.585 106595 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.585 106595 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.585 106595 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.585 106595 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.585 106595 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.585 106595 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.585 106595 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.585 106595 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.585 106595 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.586 106595 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.586 106595 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.586 106595 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.586 106595 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.586 106595 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.586 106595 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
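The nova group above (like placement and ironic below) is a keystoneauth1-style client option set; with nova.auth_type left at None, no credentials are configured for calling Nova from this process. A hedged sketch of the usual load pattern, assuming keystoneauth1 is installed; this is not the agent's exact wiring:

    # Hedged sketch of loading a keystoneauth1 plugin from a conf group.
    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    CONF = cfg.CONF
    ks_loading.register_auth_conf_options(CONF, 'nova')
    ks_loading.register_session_conf_options(CONF, 'nova')
    CONF([])

    # With nova.auth_type unset, as in the dump above, no plugin loads.
    auth = ks_loading.load_auth_from_conf_options(CONF, 'nova')
    print(auth)   # -> None
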
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.586 106595 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.586 106595 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.586 106595 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.586 106595 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.587 106595 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.587 106595 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.587 106595 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.587 106595 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.587 106595 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.587 106595 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.587 106595 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.587 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.587 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.588 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.588 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.588 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.588 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.588 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.588 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.588 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.588 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.588 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.588 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.589 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.589 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.589 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.589 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.589 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.589 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.589 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.589 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.589 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.589 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.590 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.590 106595 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.590 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.590 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.590 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.590 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.590 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.590 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.590 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.591 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.591 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.591 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.591 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.591 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.591 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.591 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.591 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.591 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.592 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.592 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.592 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.592 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.592 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.592 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.592 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.592 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.592 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.593 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
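The ovn group above shows where this agent actually connects: the OVN southbound database at ovn.ovn_sb_connection, over SSL with the ovn_sb_* key, certificate and CA files, while the northbound connection is left at its tcp:127.0.0.1:6641 default. A stdlib-only sketch for checking that endpoint by hand; treating the value as a single "ssl:host:port" string is an assumption (deployments may list several comma-separated endpoints):

    # Hedged connectivity check for the OVN SB endpoint logged above.
    import socket
    import ssl

    conn = 'ssl:ovsdbserver-sb.openstack.svc:6642'    # ovn.ovn_sb_connection
    scheme, host, port = conn.rsplit(':', 2)
    assert scheme == 'ssl'

    ctx = ssl.create_default_context(cafile='/etc/pki/tls/certs/ovndbca.crt')
    ctx.load_cert_chain('/etc/pki/tls/certs/ovndb.crt',
                        '/etc/pki/tls/private/ovndb.key')

    with socket.create_connection((host, int(port)), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print('TLS OK:', tls.version())
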
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.593 106595 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.593 106595 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.593 106595 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.593 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.593 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.593 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.593 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.594 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.594 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.594 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.594 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.594 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.594 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.594 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.594 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.594 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.595 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.595 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.595 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.595 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.595 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.595 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.595 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.595 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.595 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.596 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.596 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.596 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.596 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.596 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.596 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.596 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.596 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.597 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.597 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.597 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.597 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.597 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.597 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.597 106595 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.597 106595 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
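The block above is oslo.config's ConfigOpts.log_opt_values() output: at startup the agent dumps every registered option at DEBUG level, and options registered with secret=True (such as transport_url) are masked as ****. A minimal sketch of the same mechanism follows; the two option names are illustrative, not neutron's actual registration code:

    import logging
    from oslo_config import cfg

    LOG = logging.getLogger(__name__)
    CONF = cfg.CONF

    # Illustrative options; secret=True is what produces the "****" masking.
    CONF.register_opts([
        cfg.IntOpt('ovsdb_timeout', default=10),
        cfg.StrOpt('transport_url', secret=True),
    ])

    if __name__ == '__main__':
        logging.basicConfig(level=logging.DEBUG)
        CONF([])  # parse an empty command line
        # Emits one "name = value" DEBUG line per option, masking secrets,
        # exactly like the dump above (oslo_config/cfg.py, log_opt_values).
        CONF.log_opt_values(LOG, logging.DEBUG)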
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.649 106595 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.650 106595 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.650 106595 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.650 106595 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.650 106595 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
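The "Created schema index" lines show ovsdbapp building in-memory lookup indexes (Bridge.name, Port.name, Interface.name) as its high-level API initializes, just before the IDL dials the local switch at tcp:127.0.0.1:6640 (the ovs.ovsdb_connection value logged above). A sketch of opening the same kind of connection with ovsdbapp, assuming a reachable ovsdb-server:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Fetch the Open_vSwitch schema from the server and build an IDL for it.
    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    # OVS.ovsdb_timeout = 10 in the dump above is this transaction timeout.
    conn = connection.Connection(idl, timeout=10)
    api = impl_idl.OvsdbIdl(conn)  # autocreates the indexes seen in the log

    print(api.list_br().execute(check_error=True))  # e.g. ['br-int']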
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.662 106595 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name bbd59242-3683-4df7-8a2a-12b2eb702783 (UUID: bbd59242-3683-4df7-8a2a-12b2eb702783) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.687 106595 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.688 106595 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.688 106595 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.688 106595 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.691 106595 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.700 106595 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
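The southbound connection is dialed over SSL to ovsdbserver-sb.openstack.svc:6642, which means the agent's key, certificate and CA bundle must be registered with the underlying python-ovs stream layer before the IDL connects. A rough sketch of that wiring; the file paths are placeholders, not the paths this deployment uses, and neutron actually builds its SB IDL through its own ovsdb helpers rather than this bare call:

    from ovs.stream import Stream
    from ovsdbapp.backend.ovs_idl import connection

    # Register client key/cert and CA before dialing an ssl: target
    # (paths are hypothetical; neutron reads them from [ovn] config).
    Stream.ssl_set_private_key_file('/etc/pki/tls/private/ovn.key')
    Stream.ssl_set_certificate_file('/etc/pki/tls/certs/ovn.crt')
    Stream.ssl_set_ca_cert_file('/etc/pki/tls/certs/ca.crt')

    idl = connection.OvsdbIdl.from_server(
        'ssl:ovsdbserver-sb.openstack.svc:6642', 'OVN_Southbound')
    conn = connection.Connection(idl, timeout=180)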
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.707 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'bbd59242-3683-4df7-8a2a-12b2eb702783'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], external_ids={}, name=bbd59242-3683-4df7-8a2a-12b2eb702783, nb_cfg_timestamp=1764198010169, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
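The "Matched CREATE" line is ovsdbapp's event machinery at work: the agent registers a row event on Chassis_Private filtered by its own chassis name, and the newly created row matches it. A minimal sketch of such an event class, mirroring the (events, table, conditions) tuple printed in the log; the run() body is illustrative:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class ChassisPrivateCreateEvent(row_event.RowEvent):
        """Fires when our own Chassis_Private row appears."""

        def __init__(self, chassis_name):
            # Same triple the log prints: events, table, conditions.
            super().__init__((self.ROW_CREATE,),
                             'Chassis_Private',
                             (('name', '=', chassis_name),))
            self.event_name = self.__class__.__name__

        def run(self, event, row, old):
            # Called by the notify loop once the row matches.
            print('chassis registered, nb_cfg=%s' % row.nb_cfg)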
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.709 106595 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f0819fe2160>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.709 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.710 106595 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.710 106595 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.710 106595 INFO oslo_service.service [-] Starting 1 workers
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.715 106595 DEBUG oslo_service.service [-] Started child 106703 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.718 106703 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-4044862'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
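"Starting 1 workers" followed by "Started child 106703" is oslo.service's ProcessLauncher forking the metadata proxy worker (metadata_workers = 1 in the CONF dump further below); the child then publishes the after_init callbacks, which is where post_fork_initialize runs. A compact sketch of that launch pattern, with a hypothetical stand-in service class:

    from oslo_config import cfg
    from oslo_service import service

    class ProxyWorker(service.Service):
        """Hypothetical stand-in for the metadata proxy worker."""
        def start(self):
            super().start()
            # after_init-style work happens here, post-fork.
            print('worker up')

    if __name__ == '__main__':
        conf = cfg.CONF
        conf([])
        # workers=1 builds a ProcessLauncher, forks one child
        # ("Started child <pid>") and respawns it if it dies.
        launcher = service.launch(conf, ProxyWorker(), workers=1)
        launcher.wait()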
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.719 106595 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp4ketndkw/privsep.sock']
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.740 106703 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.740 106703 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.740 106703 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.743 106703 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.749 106703 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Nov 26 18:01:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:09.756 106703 INFO eventlet.wsgi.server [-] (106703) wsgi starting up on http:/var/lib/neutron/metadata_proxy
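The worker's "wsgi starting up on http:/var/lib/neutron/metadata_proxy" is eventlet printing the bind address verbatim: it is not a malformed URL but the unix socket path from metadata_proxy_socket (logged later in the CONF dump). Serving WSGI on a unix socket with eventlet looks roughly like the sketch below; the app body is a stand-in, not neutron's MetadataProxyHandler:

    import socket
    import eventlet
    from eventlet import wsgi

    def app(environ, start_response):
        # Stand-in handler: the real proxy forwards requests to nova
        # metadata with a signed X-Instance-ID header.
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'metadata\n']

    # eventlet.listen() with AF_UNIX binds the socket path; the backlog
    # mirrors metadata_backlog = 4096 from the config dump.
    sock = eventlet.listen('/var/lib/neutron/metadata_proxy',
                           family=socket.AF_UNIX, backlog=4096)
    wsgi.server(sock, app)  # logs "wsgi starting up on http:<path>"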
Nov 26 18:01:10 np0005537197 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 26 18:01:10 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:10.375 106595 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 26 18:01:10 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:10.376 106595 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp4ketndkw/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 26 18:01:10 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:10.265 106708 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 26 18:01:10 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:10.271 106708 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 26 18:01:10 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:10.275 106708 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Nov 26 18:01:10 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:10.276 106708 INFO oslo.privsep.daemon [-] privsep daemon running as pid 106708
Nov 26 18:01:10 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:10.380 106708 DEBUG oslo.privsep.daemon [-] privsep: reply[42dc43dd-1331-440a-b256-11bc8e03113e]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
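The pid-106708 lines come from the privsep daemon that the parent spawned through sudo + neutron-rootwrap (the helper command logged earlier, with --privsep_context neutron.privileged.namespace_cmd): it starts as root (uid/gid 0/0) and then drops to just CAP_SYS_ADMIN, which matches privsep_namespace.capabilities = [21] in the CONF dump below. Defining and using such a context with oslo.privsep looks roughly like this sketch; the context and function names here are hypothetical:

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    # Hypothetical context equivalent to [privsep_namespace] in the log:
    # retain only CAP_SYS_ADMIN (capability number 21).
    namespace_cmd = priv_context.PrivContext(
        'demo',
        cfg_section='privsep_namespace',
        pypath=__name__ + '.namespace_cmd',
        capabilities=[caps.CAP_SYS_ADMIN],
    )

    @namespace_cmd.entrypoint
    def create_netns(name):
        # Body runs inside the privsep daemon, not the agent process;
        # the "privsep: reply[...]" lines are its RPC responses.
        ...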
Nov 26 18:01:10 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:10.848 106708 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 18:01:10 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:10.848 106708 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 18:01:10 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:10.848 106708 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
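The context-manager lock trio (acquiring / acquired, waited 0.000s / released, held 0.000s) is oslo.concurrency's lockutils.synchronized decorator wrapped around neutron_lib.db.api._create_context_manager; the "inner" in each line is the decorator's wrapper function. The pattern in short, with a hypothetical function name:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('context-manager')
    def create_context_manager():
        # Only one thread builds the shared DB context manager; entry
        # and exit produce the acquired/released DEBUG lines above.
        ...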
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.351 106708 DEBUG oslo.privsep.daemon [-] privsep: reply[7c98a579-d44b-499c-9951-c76648c7c2fc]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.353 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=bbd59242-3683-4df7-8a2a-12b2eb702783, column=external_ids, values=({'neutron:ovn-metadata-id': '6f402d5d-200e-50a3-baef-8b6c513a8f4f'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.364 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbd59242-3683-4df7-8a2a-12b2eb702783, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
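These two transactions stamp the agent's identity into its Chassis_Private row: a DbAddCommand merging neutron:ovn-metadata-id into the external_ids map, then a DbSetCommand writing neutron:ovn-bridge. With ovsdbapp's generic db_add/db_set commands that is roughly the following; sb_api stands for the southbound API object built on the connection sketched earlier, and the exact kwargs (e.g. if_exists, which the log shows the agent passing) vary by ovsdbapp version:

    CHASSIS = 'bbd59242-3683-4df7-8a2a-12b2eb702783'

    # db_add merges a key into the external_ids map (DbAddCommand)...
    sb_api.db_add(
        'Chassis_Private', CHASSIS, 'external_ids',
        {'neutron:ovn-metadata-id': '6f402d5d-200e-50a3-baef-8b6c513a8f4f'},
    ).execute(check_error=True)

    # ...db_set sets/overwrites a key (DbSetCommand in the log).
    sb_api.db_set(
        'Chassis_Private', CHASSIS,
        ('external_ids', {'neutron:ovn-bridge': 'br-int'}),
    ).execute(check_error=True)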
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.370 106595 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.370 106595 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.370 106595 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.370 106595 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.370 106595 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.370 106595 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.371 106595 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.371 106595 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.371 106595 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.371 106595 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.371 106595 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.371 106595 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.371 106595 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.372 106595 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.372 106595 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.372 106595 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.372 106595 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.372 106595 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.372 106595 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.372 106595 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.373 106595 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.373 106595 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.373 106595 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.373 106595 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.373 106595 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.373 106595 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.373 106595 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.373 106595 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.374 106595 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.374 106595 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.374 106595 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.374 106595 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.374 106595 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.374 106595 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.374 106595 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.374 106595 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.374 106595 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.375 106595 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.375 106595 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.375 106595 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.375 106595 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.375 106595 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.375 106595 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.375 106595 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.376 106595 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.376 106595 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.376 106595 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.376 106595 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.376 106595 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.376 106595 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.376 106595 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.376 106595 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.376 106595 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.376 106595 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.377 106595 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.377 106595 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.377 106595 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.377 106595 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.377 106595 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.377 106595 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.377 106595 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.377 106595 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.377 106595 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.378 106595 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.378 106595 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.378 106595 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.378 106595 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.378 106595 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.378 106595 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.378 106595 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.378 106595 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.378 106595 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.378 106595 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.379 106595 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.379 106595 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.379 106595 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.379 106595 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.379 106595 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.379 106595 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.379 106595 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.379 106595 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.379 106595 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.380 106595 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.380 106595 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.380 106595 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.380 106595 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.380 106595 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.380 106595 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.380 106595 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.380 106595 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.380 106595 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.380 106595 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.381 106595 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.381 106595 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.381 106595 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.381 106595 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.381 106595 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.381 106595 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.381 106595 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.381 106595 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.381 106595 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.381 106595 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.382 106595 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.382 106595 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.382 106595 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.382 106595 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.382 106595 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.382 106595 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.382 106595 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.382 106595 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.383 106595 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.383 106595 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.383 106595 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.383 106595 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.383 106595 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.383 106595 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.383 106595 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.383 106595 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.383 106595 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.384 106595 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.384 106595 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.384 106595 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.384 106595 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.384 106595 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.384 106595 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.384 106595 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.384 106595 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.385 106595 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.385 106595 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.385 106595 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.385 106595 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.385 106595 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.385 106595 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.385 106595 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.385 106595 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.385 106595 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.386 106595 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.386 106595 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.386 106595 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.386 106595 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.386 106595 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.386 106595 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.386 106595 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.386 106595 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.386 106595 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.387 106595 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.387 106595 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.387 106595 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.387 106595 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.387 106595 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.387 106595 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.387 106595 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.387 106595 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.387 106595 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.387 106595 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.388 106595 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.388 106595 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.388 106595 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.388 106595 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.388 106595 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.388 106595 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.388 106595 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.388 106595 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.388 106595 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.388 106595 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.389 106595 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.389 106595 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.389 106595 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.389 106595 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.389 106595 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.389 106595 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.389 106595 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.389 106595 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.389 106595 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.389 106595 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.390 106595 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.390 106595 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.390 106595 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.390 106595 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.390 106595 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.390 106595 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.390 106595 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.390 106595 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.390 106595 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.391 106595 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.391 106595 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.391 106595 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.391 106595 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.391 106595 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.391 106595 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.391 106595 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.391 106595 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.391 106595 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.391 106595 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.392 106595 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.392 106595 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.392 106595 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.392 106595 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.392 106595 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.392 106595 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.392 106595 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.392 106595 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.392 106595 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.393 106595 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.393 106595 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.393 106595 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.393 106595 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.393 106595 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.393 106595 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.393 106595 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.393 106595 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.393 106595 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.393 106595 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.394 106595 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.394 106595 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.394 106595 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.394 106595 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.394 106595 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.394 106595 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.394 106595 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.394 106595 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.394 106595 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.394 106595 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.395 106595 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.395 106595 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.395 106595 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.395 106595 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.395 106595 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.395 106595 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.395 106595 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.395 106595 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.395 106595 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.396 106595 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.396 106595 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.396 106595 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.396 106595 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.396 106595 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.396 106595 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.396 106595 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.396 106595 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.396 106595 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.396 106595 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.397 106595 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.397 106595 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.397 106595 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.397 106595 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.397 106595 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.397 106595 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.397 106595 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.397 106595 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.398 106595 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.398 106595 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.398 106595 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.398 106595 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.398 106595 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.398 106595 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.398 106595 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.398 106595 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.398 106595 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.398 106595 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.399 106595 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.399 106595 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.399 106595 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.399 106595 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.399 106595 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.399 106595 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.399 106595 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.399 106595 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.399 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.400 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.400 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.400 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.400 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.400 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.400 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.400 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.400 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.400 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.401 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.401 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.401 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.401 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.401 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.401 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.401 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.401 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.401 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.401 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.402 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.402 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.402 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.402 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.402 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.402 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.402 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.402 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.402 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.402 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.403 106595 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.403 106595 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.403 106595 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.403 106595 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.403 106595 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:01:11 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:01:11.403 106595 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
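The block of DEBUG lines above, one "group.option = value" per line and closed by a row of asterisks, is oslo.config's startup option dump. A minimal sketch of the mechanism, using a hypothetical option for illustration (the agent's real option set is exactly what the log records):

    # Sketch only: how oslo.config produces the dump above. The registered
    # option here is illustrative, not the agent's actual configuration.
    import logging
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts(
        [cfg.IntOpt('thread_pool_size', default=8)], group='privsep')

    logging.basicConfig(level=logging.DEBUG)
    CONF([])  # parse an empty argv so defaults apply
    # Emits one DEBUG line per registered option, with secrets such as
    # transport_url masked as "****", then a terminating asterisk row.
    CONF.log_opt_values(logging.getLogger('oslo_service.service'),
                        logging.DEBUG)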
Nov 26 18:01:14 np0005537197 systemd-logind[819]: New session 23 of user zuul.
Nov 26 18:01:14 np0005537197 systemd[1]: Started Session 23 of User zuul.
Nov 26 18:01:15 np0005537197 python3.9[106866]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 18:01:16 np0005537197 python3.9[107022]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
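That ansible.legacy.command task is a thin wrapper around a plain podman CLI call; a runnable sketch of the same existence check, with the command arguments taken verbatim from the log:

    # Sketch: the container-existence check the task above performs.
    import subprocess

    result = subprocess.run(
        ["podman", "ps", "-a", "--filter", "name=^nova_virtlogd$",
         "--format", "{{.Names}}"],
        capture_output=True, text=True, check=True)
    names = result.stdout.split()
    print("present" if "nova_virtlogd" in names else "absent")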
Nov 26 18:01:18 np0005537197 python3.9[107186]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 18:01:18 np0005537197 systemd[1]: Reloading.
Nov 26 18:01:18 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:01:18 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:01:19 np0005537197 python3.9[107371]: ansible-ansible.builtin.service_facts Invoked
Nov 26 18:01:19 np0005537197 network[107388]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 18:01:19 np0005537197 network[107389]: 'network-scripts' will be removed from distribution in near future.
Nov 26 18:01:19 np0005537197 network[107390]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 18:01:24 np0005537197 python3.9[107651]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:01:24 np0005537197 python3.9[107804]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:01:25 np0005537197 python3.9[107957]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:01:26 np0005537197 python3.9[108110]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:01:27 np0005537197 python3.9[108263]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:01:28 np0005537197 python3.9[108416]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:01:29 np0005537197 python3.9[108569]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
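Each of the seven systemd_service tasks above amounts to stopping and disabling one legacy TripleO unit. A hedged sketch of the equivalent loop (unit names copied from the log; the Ansible module additionally reports changed/unchanged state, which this sketch omits):

    # Sketch: stop and disable the tripleo_nova units named in the tasks above.
    import subprocess

    TRIPLEO_NOVA_UNITS = [
        "tripleo_nova_libvirt.target",
        "tripleo_nova_virtlogd_wrapper.service",
        "tripleo_nova_virtnodedevd.service",
        "tripleo_nova_virtproxyd.service",
        "tripleo_nova_virtqemud.service",
        "tripleo_nova_virtsecretd.service",
        "tripleo_nova_virtstoraged.service",
    ]
    for unit in TRIPLEO_NOVA_UNITS:
        # check=False: a unit that is already stopped/disabled is not an error
        subprocess.run(["systemctl", "stop", unit], check=False)
        subprocess.run(["systemctl", "disable", unit], check=False)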
Nov 26 18:01:30 np0005537197 python3.9[108722]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:01:31 np0005537197 python3.9[108874]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:01:32 np0005537197 python3.9[109026]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:01:32 np0005537197 python3.9[109178]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:01:33 np0005537197 podman[109302]: 2025-11-26 23:01:33.561883211 +0000 UTC m=+0.106177595 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 26 18:01:33 np0005537197 python3.9[109350]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:01:34 np0005537197 python3.9[109508]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:01:35 np0005537197 python3.9[109660]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:01:35 np0005537197 python3.9[109812]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:01:36 np0005537197 python3.9[109964]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:01:37 np0005537197 python3.9[110116]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:01:38 np0005537197 podman[110240]: 2025-11-26 23:01:38.363781537 +0000 UTC m=+0.104234994 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 18:01:38 np0005537197 python3.9[110286]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:01:39 np0005537197 python3.9[110440]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:01:40 np0005537197 python3.9[110592]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:01:40 np0005537197 python3.9[110744]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
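The file tasks above delete the same unit files from both /usr/lib/systemd/system and /etc/systemd/system. A sketch with pathlib, mirroring the idempotent state=absent semantics; a daemon_reload follows in the log (the "Reloading." lines below) so systemd forgets the deleted units:

    # Sketch: remove leftover tripleo_nova unit files, tolerating absent ones.
    from pathlib import Path

    units = [
        "tripleo_nova_libvirt.target",
        "tripleo_nova_virtlogd_wrapper.service",
        "tripleo_nova_virtnodedevd.service",
        "tripleo_nova_virtproxyd.service",
        "tripleo_nova_virtqemud.service",
        "tripleo_nova_virtsecretd.service",
        "tripleo_nova_virtstoraged.service",
    ]
    for unit_dir in ("/usr/lib/systemd/system", "/etc/systemd/system"):
        for unit in units:
            # missing_ok=True gives the same "absent either way" behavior
            # as the Ansible file module with state=absent
            Path(unit_dir, unit).unlink(missing_ok=True)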
Nov 26 18:01:41 np0005537197 python3.9[110896]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
  systemctl disable --now certmonger.service
  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
fi
 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:01:42 np0005537197 python3.9[111048]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 18:01:43 np0005537197 python3.9[111200]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 18:01:43 np0005537197 systemd[1]: Reloading.
Nov 26 18:01:43 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:01:43 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:01:45 np0005537197 python3.9[111387]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:01:45 np0005537197 python3.9[111540]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:01:46 np0005537197 python3.9[111693]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:01:47 np0005537197 python3.9[111846]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:01:48 np0005537197 python3.9[111999]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:01:48 np0005537197 python3.9[112152]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:01:49 np0005537197 python3.9[112305]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
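The reset-failed calls above clear any lingering "failed" state systemd may still hold for the units whose files were just removed; sketched as a loop over the same names:

    # Sketch: clear residual failed state for the removed tripleo_nova units.
    import subprocess

    for unit in [
        "tripleo_nova_libvirt.target",
        "tripleo_nova_virtlogd_wrapper.service",
        "tripleo_nova_virtnodedevd.service",
        "tripleo_nova_virtproxyd.service",
        "tripleo_nova_virtqemud.service",
        "tripleo_nova_virtsecretd.service",
        "tripleo_nova_virtstoraged.service",
    ]:
        subprocess.run(["/usr/bin/systemctl", "reset-failed", unit],
                       check=False)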
Nov 26 18:01:50 np0005537197 python3.9[112458]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 26 18:01:51 np0005537197 python3.9[112611]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 26 18:01:53 np0005537197 python3.9[112769]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
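The getent/group/user triplet above pins a libvirt account to fixed IDs before package installation. A rough sketch of the equivalent raw commands (IDs and shell taken from the log); unlike the Ansible modules these calls are not idempotent, hence check=False:

    # Sketch: create the libvirt group and user with the IDs seen above.
    import subprocess

    subprocess.run(["groupadd", "--gid", "42473", "libvirt"], check=False)
    subprocess.run(
        ["useradd", "--uid", "42473", "--gid", "libvirt",
         "--comment", "libvirt user", "--shell", "/sbin/nologin", "libvirt"],
        check=False)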
Nov 26 18:01:54 np0005537197 python3.9[112929]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 18:01:55 np0005537197 python3.9[113013]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
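The dnf task installs the libvirt/qemu stack for the compute role. A sketch of the equivalent CLI invocation (package names copied from the log, minus the stray trailing spaces the task's list carries):

    # Sketch: the same package set installed via the dnf CLI.
    import subprocess

    pkgs = ["libvirt", "libvirt-admin", "libvirt-client", "libvirt-daemon",
            "qemu-kvm", "qemu-img", "libguestfs", "libseccomp", "swtpm",
            "swtpm-tools", "edk2-ovmf", "ceph-common", "cyrus-sasl-scram"]
    subprocess.run(["dnf", "-y", "install", *pkgs], check=True)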
Nov 26 18:02:03 np0005537197 podman[113075]: 2025-11-26 23:02:03.935823477 +0000 UTC m=+0.221115271 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 26 18:02:08 np0005537197 podman[113224]: 2025-11-26 23:02:08.773061303 +0000 UTC m=+0.074182224 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 26 18:02:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:02:09.599 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 18:02:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:02:09.600 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 18:02:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:02:09.600 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
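Annotation: the acquiring/acquired/released trio above (repeated once a minute throughout this log) is emitted by oslo.concurrency every time a lock-decorated method runs; the stripped trailing `#033[00m` was a journal-escaped ANSI color reset, not log content. A sketch of the pattern, assuming oslo.concurrency is installed as it is inside the agent container; the method body is illustrative only:

    from oslo_concurrency import lockutils

    # Each call logs the same three DEBUG lines seen above.
    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        pass  # check (and respawn) monitored child processes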
Nov 26 18:02:21 np0005537197 kernel: SELinux:  Converting 2757 SID table entries...
Nov 26 18:02:21 np0005537197 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 18:02:21 np0005537197 kernel: SELinux:  policy capability open_perms=1
Nov 26 18:02:21 np0005537197 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 18:02:21 np0005537197 kernel: SELinux:  policy capability always_check_network=0
Nov 26 18:02:21 np0005537197 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 18:02:21 np0005537197 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 18:02:21 np0005537197 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 18:02:30 np0005537197 kernel: SELinux:  Converting 2757 SID table entries...
Nov 26 18:02:30 np0005537197 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 18:02:30 np0005537197 kernel: SELinux:  policy capability open_perms=1
Nov 26 18:02:30 np0005537197 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 18:02:30 np0005537197 kernel: SELinux:  policy capability always_check_network=0
Nov 26 18:02:30 np0005537197 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 18:02:30 np0005537197 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 18:02:30 np0005537197 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
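Annotation: each "Converting N SID table entries" block marks an SELinux policy reload triggered by the deployment loading policy modules, and the capability lines record which optional policy features the new policy enables. The same values can be read back from selinuxfs at any time, assuming it is mounted at the usual path:

    import pathlib

    # Read the policy capabilities the kernel prints on each policy load.
    caps = pathlib.Path("/sys/fs/selinux/policy_capabilities")
    for cap in sorted(caps.iterdir()):
        print(cap.name, cap.read_text().strip())  # e.g. "open_perms 1"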
Nov 26 18:02:34 np0005537197 dbus-broker-launch[792]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 26 18:02:34 np0005537197 podman[113265]: 2025-11-26 23:02:34.865545283 +0000 UTC m=+0.145080549 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 18:02:39 np0005537197 podman[113291]: 2025-11-26 23:02:39.782472511 +0000 UTC m=+0.068967135 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent)
Nov 26 18:03:05 np0005537197 podman[123922]: 2025-11-26 23:03:05.837693878 +0000 UTC m=+0.122906538 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 18:03:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:03:09.600 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 18:03:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:03:09.600 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 18:03:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:03:09.601 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 18:03:10 np0005537197 podman[126450]: 2025-11-26 23:03:10.763860402 +0000 UTC m=+0.068306538 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Nov 26 18:03:29 np0005537197 kernel: SELinux:  Converting 2758 SID table entries...
Nov 26 18:03:29 np0005537197 kernel: SELinux:  policy capability network_peer_controls=1
Nov 26 18:03:29 np0005537197 kernel: SELinux:  policy capability open_perms=1
Nov 26 18:03:29 np0005537197 kernel: SELinux:  policy capability extended_socket_class=1
Nov 26 18:03:29 np0005537197 kernel: SELinux:  policy capability always_check_network=0
Nov 26 18:03:29 np0005537197 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 26 18:03:29 np0005537197 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 26 18:03:29 np0005537197 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 26 18:03:30 np0005537197 dbus-broker-launch[785]: Noticed file-system modification, trigger reload.
Nov 26 18:03:30 np0005537197 dbus-broker-launch[792]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 26 18:03:30 np0005537197 dbus-broker-launch[785]: Noticed file-system modification, trigger reload.
Nov 26 18:03:36 np0005537197 podman[130404]: 2025-11-26 23:03:36.948194411 +0000 UTC m=+0.185300685 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 26 18:03:38 np0005537197 systemd[1]: Stopping OpenSSH server daemon...
Nov 26 18:03:38 np0005537197 systemd[1]: sshd.service: Deactivated successfully.
Nov 26 18:03:38 np0005537197 systemd[1]: Stopped OpenSSH server daemon.
Nov 26 18:03:38 np0005537197 systemd[1]: sshd.service: Consumed 1.955s CPU time, read 32.0K from disk, written 0B to disk.
Nov 26 18:03:38 np0005537197 systemd[1]: Stopped target sshd-keygen.target.
Nov 26 18:03:38 np0005537197 systemd[1]: Stopping sshd-keygen.target...
Nov 26 18:03:38 np0005537197 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 26 18:03:38 np0005537197 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 26 18:03:38 np0005537197 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 26 18:03:38 np0005537197 systemd[1]: Reached target sshd-keygen.target.
Nov 26 18:03:38 np0005537197 systemd[1]: Starting OpenSSH server daemon...
Nov 26 18:03:38 np0005537197 systemd[1]: Started OpenSSH server daemon.
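Annotation: sshd is restarted after new configuration lands, and the three host-key generation units are skipped because their ConditionPathExists clause defers key creation to cloud-init. Whether a unit's condition passed can be queried afterwards; a sketch assuming the RHEL sshd-keygen@.service template behind the "OpenSSH rsa Server Key Generation" label:

    import subprocess

    # Ask systemd whether the condition-gated keygen unit ran.
    out = subprocess.run(
        ["systemctl", "show", "sshd-keygen@rsa.service",
         "-p", "ConditionResult"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(out)  # "ConditionResult=no" when the unit was skipped, as above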
Nov 26 18:03:40 np0005537197 podman[131126]: 2025-11-26 23:03:40.904396166 +0000 UTC m=+0.090896691 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 26 18:03:41 np0005537197 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 18:03:41 np0005537197 systemd[1]: Starting man-db-cache-update.service...
Nov 26 18:03:41 np0005537197 systemd[1]: Reloading.
Nov 26 18:03:41 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:03:41 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:03:41 np0005537197 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 18:03:45 np0005537197 python3.9[134580]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 18:03:45 np0005537197 systemd[1]: Reloading.
Nov 26 18:03:45 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:03:45 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:03:46 np0005537197 python3.9[135795]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 18:03:46 np0005537197 systemd[1]: Reloading.
Nov 26 18:03:46 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:03:46 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:03:47 np0005537197 python3.9[136859]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 18:03:47 np0005537197 systemd[1]: Reloading.
Nov 26 18:03:47 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:03:47 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:03:48 np0005537197 python3.9[137992]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 18:03:49 np0005537197 systemd[1]: Reloading.
Nov 26 18:03:49 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:03:49 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
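Annotation: the four systemd tasks above (with a daemon reload after each) stop and mask the monolithic libvirtd service and its TCP/TLS sockets so that only the modular per-driver daemons enabled next can own the sockets. The equivalent manual operation, sketched with systemctl, where mask --now both stops and masks a unit:

    import subprocess

    # Mirror the enabled=False masked=True state=stopped tasks above.
    for unit in ("libvirtd", "libvirtd-tcp.socket",
                 "libvirtd-tls.socket", "virtproxyd-tcp.socket"):
        subprocess.run(["systemctl", "mask", "--now", unit], check=True)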
Nov 26 18:03:50 np0005537197 python3.9[139268]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 18:03:50 np0005537197 systemd[1]: Reloading.
Nov 26 18:03:50 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:03:50 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:03:51 np0005537197 python3.9[140511]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 18:03:51 np0005537197 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 18:03:51 np0005537197 systemd[1]: Finished man-db-cache-update.service.
Nov 26 18:03:51 np0005537197 systemd[1]: man-db-cache-update.service: Consumed 12.916s CPU time.
Nov 26 18:03:51 np0005537197 systemd[1]: run-r9970deb2aa834384abb62d712d1095c8.service: Deactivated successfully.
Nov 26 18:03:52 np0005537197 systemd[1]: Reloading.
Nov 26 18:03:52 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:03:52 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:03:53 np0005537197 python3.9[140881]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 18:03:53 np0005537197 systemd[1]: Reloading.
Nov 26 18:03:53 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:03:53 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:03:54 np0005537197 python3.9[141071]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 18:03:55 np0005537197 python3.9[141226]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 18:03:55 np0005537197 systemd[1]: Reloading.
Nov 26 18:03:56 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:03:56 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:03:57 np0005537197 python3.9[141416]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 26 18:03:58 np0005537197 systemd[1]: Reloading.
Nov 26 18:03:58 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:03:58 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:03:58 np0005537197 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 26 18:03:58 np0005537197 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 26 18:03:59 np0005537197 python3.9[141608]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 18:04:00 np0005537197 python3.9[141763]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 18:04:01 np0005537197 python3.9[141918]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 18:04:03 np0005537197 python3.9[142073]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 18:04:04 np0005537197 python3.9[142228]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 18:04:05 np0005537197 python3.9[142383]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 18:04:06 np0005537197 python3.9[142538]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 18:04:07 np0005537197 podman[142665]: 2025-11-26 23:04:07.560183138 +0000 UTC m=+0.178495333 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller)
Nov 26 18:04:07 np0005537197 python3.9[142710]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 18:04:08 np0005537197 python3.9[142875]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 18:04:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:04:09.601 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 18:04:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:04:09.602 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 18:04:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:04:09.602 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 18:04:09 np0005537197 python3.9[143030]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 18:04:10 np0005537197 python3.9[143185]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 18:04:11 np0005537197 podman[143312]: 2025-11-26 23:04:11.447527783 +0000 UTC m=+0.106809133 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 26 18:04:11 np0005537197 python3.9[143360]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 18:04:13 np0005537197 python3.9[143515]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 26 18:04:14 np0005537197 python3.9[143670]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
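Annotation: the long run of systemd tasks from 18:03:50 to here enables the five modular libvirt daemons and their socket-activation units one by one (virtlogd has no read-only socket, and virtproxyd-tls.socket is additionally started). Condensed into a single sketch:

    import subprocess

    # Enable the modular daemons and the socket units toggled above.
    daemons = ("virtlogd", "virtnodedevd", "virtproxyd",
               "virtqemud", "virtsecretd")
    units = ["virtlogd.socket", "virtlogd-admin.socket",
             "virtproxyd-tls.socket"]
    for d in daemons[1:]:
        units += [f"{d}.socket", f"{d}-ro.socket", f"{d}-admin.socket"]
    services = [f"{d}.service" for d in daemons]
    subprocess.run(["systemctl", "enable", *services, *units], check=True)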
Nov 26 18:04:15 np0005537197 python3.9[143825]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:04:16 np0005537197 python3.9[143977]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:04:17 np0005537197 python3.9[144129]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:04:18 np0005537197 python3.9[144281]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:04:18 np0005537197 python3.9[144433]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:04:19 np0005537197 python3.9[144585]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
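Annotation: the file tasks above prepare the PKI directory tree for libvirt and QEMU certificates, each with the container_file_t SELinux type. A rough equivalent (chcon used for brevity where Ansible goes through libselinux; ownership details such as group=qemu on /etc/pki/qemu are omitted):

    import os
    import subprocess

    # Create the PKI directories with the SELinux type requested above.
    for path in ("/etc/pki/libvirt", "/etc/pki/libvirt/private",
                 "/etc/pki/CA", "/etc/pki/qemu"):
        os.makedirs(path, mode=0o755, exist_ok=True)
        subprocess.run(["chcon", "-t", "container_file_t", path], check=True)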
Nov 26 18:04:20 np0005537197 python3.9[144737]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:04:21 np0005537197 python3.9[144862]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764198259.9395761-554-142478521007185/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:22 np0005537197 python3.9[145014]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:04:23 np0005537197 python3.9[145139]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764198261.9113436-554-166008494280352/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:24 np0005537197 python3.9[145291]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:04:24 np0005537197 python3.9[145416]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764198263.4570155-554-191761134506341/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:25 np0005537197 python3.9[145568]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:04:26 np0005537197 python3.9[145693]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764198264.9084194-554-259022330403974/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:26 np0005537197 python3.9[145845]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:04:27 np0005537197 python3.9[145970]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764198266.2667294-554-111709824692168/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:28 np0005537197 python3.9[146122]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:04:28 np0005537197 python3.9[146247]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764198267.650825-554-68393715920138/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:29 np0005537197 python3.9[146399]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:04:30 np0005537197 python3.9[146522]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764198268.985441-554-177712570910698/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:30 np0005537197 python3.9[146674]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:04:31 np0005537197 python3.9[146799]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764198270.3953114-554-7714512342635/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
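Annotation: each configuration file above is deployed as a stat-then-copy pair: ansible.legacy.stat fetches the destination's SHA-1 checksum, and the copy only rewrites the file when it differs from the staged source. A condensed sketch of that idempotent pattern (function name and paths are illustrative):

    import hashlib
    import pathlib
    import shutil

    def deploy(src: pathlib.Path, dest: pathlib.Path,
               mode: int = 0o640) -> bool:
        """Copy src over dest only when their SHA-1 checksums differ."""
        digest = lambda p: hashlib.sha1(p.read_bytes()).hexdigest()
        if dest.exists() and digest(dest) == digest(src):
            return False          # unchanged, like an Ansible "ok" result
        shutil.copyfile(src, dest)
        dest.chmod(mode)
        return True               # rewritten, like an Ansible "changed"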
Nov 26 18:04:32 np0005537197 python3.9[146951]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
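Annotation: the command above stores the SASL credential for the "migration" user in libvirt's passwd.db, with -p reading the password from stdin (the stdin value logged here is a throwaway CI secret). The same call sketched in Python, with a placeholder password rather than any real deployment secret:

    import subprocess

    # Store the migration credential exactly as the logged command does.
    subprocess.run(
        ["saslpasswd2", "-f", "/etc/libvirt/passwd.db",
         "-p", "-a", "libvirt", "-u", "openstack", "migration"],
        input="CHANGE_ME\n", text=True, check=True,  # placeholder secret
    )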
Nov 26 18:04:33 np0005537197 python3.9[147104]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:34 np0005537197 python3.9[147256]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:35 np0005537197 python3.9[147408]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:35 np0005537197 python3.9[147560]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:36 np0005537197 python3.9[147712]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:37 np0005537197 podman[147864]: 2025-11-26 23:04:37.828234254 +0000 UTC m=+0.207297726 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 26 18:04:37 np0005537197 python3.9[147865]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:38 np0005537197 python3.9[148039]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:39 np0005537197 python3.9[148191]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:40 np0005537197 python3.9[148343]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:41 np0005537197 python3.9[148495]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:41 np0005537197 podman[148595]: 2025-11-26 23:04:41.789459893 +0000 UTC m=+0.083865479 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 26 18:04:42 np0005537197 python3.9[148666]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:42 np0005537197 python3.9[148818]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:43 np0005537197 python3.9[148970]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:44 np0005537197 python3.9[149122]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:45 np0005537197 python3.9[149274]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:04:46 np0005537197 python3.9[149397]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764198284.8558755-775-278215837153753/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:46 np0005537197 python3.9[149549]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:04:47 np0005537197 python3.9[149672]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764198286.3253832-775-8051228149990/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:48 np0005537197 python3.9[149824]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:04:48 np0005537197 python3.9[149947]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764198287.7410693-775-166331206207386/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:49 np0005537197 python3.9[150099]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:04:50 np0005537197 python3.9[150222]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764198289.153552-775-29884836931413/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:51 np0005537197 python3.9[150374]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:04:51 np0005537197 python3.9[150497]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764198290.607027-775-228480311541520/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:52 np0005537197 python3.9[150649]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:04:53 np0005537197 python3.9[150772]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764198292.1274834-775-119876725130767/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:54 np0005537197 python3.9[150924]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:04:54 np0005537197 python3.9[151047]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764198293.602425-775-27715340570745/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:55 np0005537197 python3.9[151199]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:04:56 np0005537197 python3.9[151322]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764198295.0293949-775-148163550638601/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:56 np0005537197 python3.9[151474]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:04:57 np0005537197 python3.9[151597]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764198296.3354383-775-172334662164379/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:58 np0005537197 python3.9[151749]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:04:59 np0005537197 python3.9[151872]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764198297.905341-775-110315075954095/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:04:59 np0005537197 python3.9[152024]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:05:00 np0005537197 python3.9[152147]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764198299.3321354-775-5503887043129/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:05:01 np0005537197 python3.9[152299]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:05:02 np0005537197 python3.9[152422]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764198300.9018872-775-19793989187549/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:05:02 np0005537197 python3.9[152574]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:05:03 np0005537197 python3.9[152697]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764198302.436864-775-264147478072052/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:05:04 np0005537197 python3.9[152849]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:05:05 np0005537197 python3.9[152972]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764198303.8622906-775-224991129258257/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
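Each override.conf deployment above is a stat-then-copy pair: ansible checksums the destination first (ansible.legacy.stat) and ships the file only when the SHA-1 differs. All fourteen copies carry the same checksum 0bad41f409b4ee7e780a2a59dc18f5c84ed99826, i.e. one libvirt-socket.unit.j2 template rendered identically for every socket. A sketch of that idempotent-copy idiom (function names are illustrative):

    import hashlib
    import shutil
    from pathlib import Path
    from typing import Optional

    def sha1(path: Path) -> Optional[str]:
        """SHA-1 of a file, or None if it does not exist yet."""
        if not path.is_file():
            return None
        return hashlib.sha1(path.read_bytes()).hexdigest()

    def copy_if_changed(src: Path, dest: Path, mode: int = 0o644) -> bool:
        """Copy src over dest only when the contents differ (ansible-style)."""
        if sha1(src) == sha1(dest):
            return False  # already in the desired state, nothing shipped
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest)
        dest.chmod(mode)
        return True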
Nov 26 18:05:05 np0005537197 python3.9[153122]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; ls -lRZ /run/libvirt | grep -E ':container_\S+_t' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:05:06 np0005537197 python3.9[153277]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
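The two tasks above audit SELinux state before the libvirt restarts: the shell pipeline flags anything under /run/libvirt still labelled with a container_*_t file context, and ansible.posix.seboolean persistently enables os_enable_vtpm (the CLI equivalent is `setsebool -P os_enable_vtpm on`). A sketch of both steps, assuming the stock coreutils and policycoreutils tools:

    import subprocess

    # 1. Approximate the `ls -lRZ /run/libvirt | grep -E ':container_\S+_t'` check.
    listing = subprocess.run(
        ["ls", "-lRZ", "/run/libvirt"],
        check=True, capture_output=True, text=True,
    ).stdout
    stray = [line for line in listing.splitlines() if ":container_" in line]
    if stray:
        print("container-labelled paths under /run/libvirt:", *stray, sep="\n")

    # 2. What the seboolean task does: persistently enable os_enable_vtpm.
    subprocess.run(["setsebool", "-P", "os_enable_vtpm", "on"], check=True)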
Nov 26 18:05:08 np0005537197 dbus-broker-launch[792]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 26 18:05:08 np0005537197 podman[153405]: 2025-11-26 23:05:08.737150728 +0000 UTC m=+0.143937354 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 26 18:05:08 np0005537197 python3.9[153453]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:05:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:05:09.603 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 18:05:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:05:09.604 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 18:05:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:05:09.604 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 18:05:09 np0005537197 python3.9[153614]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:05:10 np0005537197 python3.9[153766]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:05:11 np0005537197 python3.9[153918]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:05:12 np0005537197 podman[154042]: 2025-11-26 23:05:12.031580846 +0000 UTC m=+0.080955374 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent)
Nov 26 18:05:12 np0005537197 python3.9[154089]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:05:13 np0005537197 python3.9[154241]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:05:13 np0005537197 python3.9[154393]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:05:14 np0005537197 python3.9[154546]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:05:15 np0005537197 python3.9[154698]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:05:16 np0005537197 python3.9[154850]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
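The copies above fan a single TLS key pair plus CA certificate out to every path libvirt and QEMU expect, with tighter modes where it matters (0600 on the libvirt server key, 0640 with group qemu on everything the emulator reads). A sketch of the layout exactly as recorded in the log, assuming root and an existing qemu group:

    import grp
    import os
    import shutil
    from pathlib import Path

    SRC = Path("/var/lib/openstack/certs/libvirt/default")

    # (source, destination, mode, group) tuples taken from the copy tasks above.
    CERT_MAP = [
        ("tls.crt", "/etc/pki/libvirt/servercert.pem",        0o644, "root"),
        ("tls.key", "/etc/pki/libvirt/private/serverkey.pem", 0o600, "root"),
        ("tls.crt", "/etc/pki/libvirt/clientcert.pem",        0o644, "root"),
        ("tls.key", "/etc/pki/libvirt/private/clientkey.pem", 0o644, "root"),
        ("ca.crt",  "/etc/pki/CA/cacert.pem",                 0o644, "root"),
        ("tls.crt", "/etc/pki/qemu/server-cert.pem",          0o640, "qemu"),
        ("tls.key", "/etc/pki/qemu/server-key.pem",           0o640, "qemu"),
        ("tls.crt", "/etc/pki/qemu/client-cert.pem",          0o640, "qemu"),
        ("tls.key", "/etc/pki/qemu/client-key.pem",           0o640, "qemu"),
        ("ca.crt",  "/etc/pki/qemu/ca-cert.pem",              0o640, "qemu"),
    ]

    for name, dest, mode, group in CERT_MAP:
        dest = Path(dest)
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(SRC / name, dest)
        os.chown(dest, 0, grp.getgrnam(group).gr_gid)  # owner stays root
        dest.chmod(mode)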
Nov 26 18:05:17 np0005537197 python3.9[155002]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 18:05:17 np0005537197 systemd[1]: Reloading.
Nov 26 18:05:17 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:05:17 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:05:17 np0005537197 systemd[1]: Starting libvirt logging daemon socket...
Nov 26 18:05:17 np0005537197 systemd[1]: Listening on libvirt logging daemon socket.
Nov 26 18:05:17 np0005537197 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 26 18:05:17 np0005537197 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 26 18:05:17 np0005537197 systemd[1]: Starting libvirt logging daemon...
Nov 26 18:05:17 np0005537197 systemd[1]: Started libvirt logging daemon.
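Each libvirt daemon is restarted the same way: daemon_reload=True forces `systemctl daemon-reload` so the fresh socket drop-ins are parsed, and the service restart then pulls its .socket units up first, which is exactly the Listening on/Starting/Started sequence in the journal. A subprocess sketch of that pattern, covering the five virt* services restarted in this run:

    import subprocess

    def restart_with_reload(unit: str) -> None:
        """Mirror ansible.builtin.systemd with daemon_reload=True, state=restarted."""
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "restart", unit], check=True)

    for unit in ("virtlogd.service", "virtnodedevd.service", "virtproxyd.service",
                 "virtqemud.service", "virtsecretd.service"):
        restart_with_reload(unit)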
Nov 26 18:05:18 np0005537197 python3.9[155196]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 18:05:18 np0005537197 systemd[1]: Reloading.
Nov 26 18:05:18 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:05:18 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:05:18 np0005537197 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 26 18:05:18 np0005537197 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 26 18:05:18 np0005537197 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 26 18:05:18 np0005537197 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 26 18:05:18 np0005537197 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 26 18:05:18 np0005537197 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 26 18:05:18 np0005537197 systemd[1]: Starting libvirt nodedev daemon...
Nov 26 18:05:18 np0005537197 systemd[1]: Started libvirt nodedev daemon.
Nov 26 18:05:19 np0005537197 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 26 18:05:19 np0005537197 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 26 18:05:19 np0005537197 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 26 18:05:19 np0005537197 python3.9[155413]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 18:05:19 np0005537197 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 26 18:05:19 np0005537197 systemd[1]: Reloading.
Nov 26 18:05:19 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:05:19 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:05:19 np0005537197 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 26 18:05:19 np0005537197 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 26 18:05:19 np0005537197 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 26 18:05:19 np0005537197 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 26 18:05:19 np0005537197 systemd[1]: Starting libvirt proxy daemon...
Nov 26 18:05:19 np0005537197 systemd[1]: Started libvirt proxy daemon.
Nov 26 18:05:20 np0005537197 setroubleshoot[155341]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l df75c397-fbb3-4270-bdd0-565f58842541
Nov 26 18:05:20 np0005537197 setroubleshoot[155341]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

*****  Plugin dac_override (91.4 confidence) suggests   **********************

If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
Then turn on full auditing to get path information about the offending file and generate the error again.
Do

Turn on full auditing
# auditctl -w /etc/shadow -p w
Try to recreate AVC. Then execute
# ausearch -m avc -ts recent
If you see PATH record check ownership/permissions on file, and fix it,
otherwise report as a bugzilla.

*****  Plugin catchall (9.59 confidence) suggests   **************************

If you believe that virtlogd should have the dac_read_search capability by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
# semodule -X 300 -i my-virtlogd.pp
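setroubleshoot's catchall suggestion boils down to: reproduce the AVC, then build and load a local policy module with audit2allow. A sketch wrapping the two commands quoted above (the module name my-virtlogd comes from the log's own suggestion; run it only after confirming the dac_read_search denial is benign):

    import subprocess

    # Collect the raw virtlogd AVC records...
    raw = subprocess.run(
        ["ausearch", "-c", "virtlogd", "--raw"],
        check=True, capture_output=True,
    ).stdout

    # ...compile them into a local policy module, then load it at priority 300.
    subprocess.run(["audit2allow", "-M", "my-virtlogd"], input=raw, check=True)
    subprocess.run(["semodule", "-X", "300", "-i", "my-virtlogd.pp"], check=True)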
Nov 26 18:05:20 np0005537197 python3.9[155634]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 18:05:20 np0005537197 systemd[1]: Reloading.
Nov 26 18:05:20 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:05:21 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:05:21 np0005537197 systemd[1]: Listening on libvirt locking daemon socket.
Nov 26 18:05:21 np0005537197 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 26 18:05:21 np0005537197 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 26 18:05:21 np0005537197 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 26 18:05:21 np0005537197 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 26 18:05:21 np0005537197 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 26 18:05:21 np0005537197 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 26 18:05:21 np0005537197 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 26 18:05:21 np0005537197 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 26 18:05:21 np0005537197 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 26 18:05:21 np0005537197 systemd[1]: Starting libvirt QEMU daemon...
Nov 26 18:05:21 np0005537197 systemd[1]: Started libvirt QEMU daemon.
Nov 26 18:05:22 np0005537197 python3.9[155849]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 18:05:22 np0005537197 systemd[1]: Reloading.
Nov 26 18:05:22 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:05:22 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:05:22 np0005537197 systemd[1]: Starting libvirt secret daemon socket...
Nov 26 18:05:22 np0005537197 systemd[1]: Listening on libvirt secret daemon socket.
Nov 26 18:05:22 np0005537197 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 26 18:05:22 np0005537197 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 26 18:05:22 np0005537197 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 26 18:05:22 np0005537197 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 26 18:05:22 np0005537197 systemd[1]: Starting libvirt secret daemon...
Nov 26 18:05:22 np0005537197 systemd[1]: Started libvirt secret daemon.
Nov 26 18:05:23 np0005537197 python3.9[156062]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:05:24 np0005537197 python3.9[156214]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 18:05:25 np0005537197 python3.9[156366]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:05:26 np0005537197 python3.9[156489]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764198325.1918378-1120-152682806645465/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:05:27 np0005537197 python3.9[156641]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:05:28 np0005537197 python3.9[156793]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:05:28 np0005537197 python3.9[156871]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:05:29 np0005537197 python3.9[157023]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:05:30 np0005537197 python3.9[157101]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.n5e9zov7 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:05:30 np0005537197 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 26 18:05:30 np0005537197 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 26 18:05:30 np0005537197 python3.9[157253]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:05:31 np0005537197 python3.9[157331]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:05:32 np0005537197 python3.9[157483]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:05:33 np0005537197 python3[157636]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 26 18:05:34 np0005537197 python3.9[157788]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:05:34 np0005537197 python3.9[157866]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:05:35 np0005537197 python3.9[158018]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:05:36 np0005537197 python3.9[158096]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:05:37 np0005537197 python3.9[158248]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:05:37 np0005537197 python3.9[158326]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:05:38 np0005537197 python3.9[158478]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:05:38 np0005537197 podman[158528]: 2025-11-26 23:05:38.952514572 +0000 UTC m=+0.141263253 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 26 18:05:40 np0005537197 python3.9[158573]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:05:40 np0005537197 python3.9[158734]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:05:41 np0005537197 python3.9[158859]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764198340.2711806-1245-162534179715326/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:05:42 np0005537197 podman[158983]: 2025-11-26 23:05:42.358949549 +0000 UTC m=+0.074540970 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true)
Nov 26 18:05:42 np0005537197 python3.9[159029]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:05:43 np0005537197 python3.9[159181]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:05:44 np0005537197 python3.9[159336]: ansible-ansible.builtin.blockinfile Invoked with backup=False block='include "/etc/nftables/iptables.nft"\ninclude "/etc/nftables/edpm-chains.nft"\ninclude "/etc/nftables/edpm-rules.nft"\ninclude "/etc/nftables/edpm-jumps.nft"\n' path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
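blockinfile maintains a marker-delimited block ("# BEGIN/END ANSIBLE MANAGED BLOCK") inside /etc/sysconfig/nftables.conf and validates the candidate file with `nft -c -f %s` before swapping it into place. A minimal re-implementation of the marker idiom, with the block text from the log and the validation step left out for brevity:

    from pathlib import Path

    BEGIN = "# BEGIN ANSIBLE MANAGED BLOCK"
    END = "# END ANSIBLE MANAGED BLOCK"
    BLOCK = (
        'include "/etc/nftables/iptables.nft"\n'
        'include "/etc/nftables/edpm-chains.nft"\n'
        'include "/etc/nftables/edpm-rules.nft"\n'
        'include "/etc/nftables/edpm-jumps.nft"\n'
    )

    def set_managed_block(path: Path, block: str) -> None:
        """Insert or replace the marker-delimited block in the file."""
        text = path.read_text()
        if BEGIN in text and END in text:
            head, rest = text.split(BEGIN, 1)
            _, tail = rest.split(END, 1)
            text = head + BEGIN + "\n" + block + END + tail
        else:
            text = text.rstrip("\n") + "\n" + BEGIN + "\n" + block + END + "\n"
        path.write_text(text)

    set_managed_block(Path("/etc/sysconfig/nftables.conf"), BLOCK)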
Nov 26 18:05:45 np0005537197 python3.9[159488]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:05:46 np0005537197 python3.9[159641]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:05:47 np0005537197 python3.9[159795]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:05:48 np0005537197 python3.9[159950]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
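The nftables apply sequence above is check-then-commit: concatenate the chain/flush/rule/jump files through `nft -c -f -` as a dry run, load the chains, and only while the edpm-rules.nft.changed flag file exists flush and reload the rule set, deleting the flag afterwards so an unchanged ruleset is never reapplied. A sketch of that flow:

    import subprocess
    from pathlib import Path

    NFT_DIR = Path("/etc/nftables")
    CHECK_SET = ["edpm-chains.nft", "edpm-flushes.nft", "edpm-rules.nft",
                 "edpm-update-jumps.nft", "edpm-jumps.nft"]
    APPLY_SET = ["edpm-flushes.nft", "edpm-rules.nft", "edpm-update-jumps.nft"]
    FLAG = NFT_DIR / "edpm-rules.nft.changed"

    def nft_feed(names, check_only=False):
        """cat the given rule files into `nft [-c] -f -`."""
        payload = "".join((NFT_DIR / n).read_text() for n in names)
        cmd = ["nft", "-c", "-f", "-"] if check_only else ["nft", "-f", "-"]
        subprocess.run(cmd, input=payload, text=True, check=True)

    nft_feed(CHECK_SET, check_only=True)                       # dry-run syntax check
    subprocess.run(["nft", "-f", NFT_DIR / "edpm-chains.nft"], check=True)
    if FLAG.exists():                                          # rules changed this run
        nft_feed(APPLY_SET)
        FLAG.unlink()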
Nov 26 18:05:48 np0005537197 python3.9[160102]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:05:49 np0005537197 python3.9[160225]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764198348.3632646-1317-190344979388912/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:05:50 np0005537197 python3.9[160377]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:05:51 np0005537197 python3.9[160500]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764198349.9545436-1332-45347325743333/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:05:52 np0005537197 python3.9[160652]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:05:52 np0005537197 python3.9[160775]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764198351.5221868-1347-183615960668837/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:05:53 np0005537197 python3.9[160927]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:05:53 np0005537197 systemd[1]: Reloading.
Nov 26 18:05:53 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:05:54 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:05:55 np0005537197 systemd[1]: Reached target edpm_libvirt.target.
Nov 26 18:05:56 np0005537197 python3.9[161119]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 26 18:05:56 np0005537197 systemd[1]: Reloading.
Nov 26 18:05:56 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:05:56 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:05:56 np0005537197 systemd[1]: Reloading.
Nov 26 18:05:56 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:05:56 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:05:57 np0005537197 systemd-logind[819]: Session 23 logged out. Waiting for processes to exit.
Nov 26 18:05:57 np0005537197 systemd[1]: session-23.scope: Deactivated successfully.
Nov 26 18:05:57 np0005537197 systemd[1]: session-23.scope: Consumed 3min 53.457s CPU time.
Nov 26 18:05:57 np0005537197 systemd-logind[819]: Removed session 23.
Nov 26 18:06:02 np0005537197 systemd-logind[819]: New session 24 of user zuul.
Nov 26 18:06:02 np0005537197 systemd[1]: Started Session 24 of User zuul.
Nov 26 18:06:03 np0005537197 python3.9[161369]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 18:06:05 np0005537197 python3.9[161523]: ansible-ansible.builtin.service_facts Invoked
Nov 26 18:06:05 np0005537197 network[161540]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 18:06:05 np0005537197 network[161541]: 'network-scripts' will be removed from distribution in near future.
Nov 26 18:06:05 np0005537197 network[161542]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 18:06:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:06:09.605 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 18:06:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:06:09.607 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 18:06:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:06:09.607 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 18:06:09 np0005537197 podman[161638]: 2025-11-26 23:06:09.8560108 +0000 UTC m=+0.140655840 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 18:06:11 np0005537197 python3.9[161840]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 18:06:12 np0005537197 podman[161896]: 2025-11-26 23:06:12.594814297 +0000 UTC m=+0.075204746 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 26 18:06:12 np0005537197 python3.9[161943]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
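The dnf call above is the package-install step for the iSCSI initiator stack; reconstructed from the logged parameters it is roughly the following task (the task name is illustrative; ansible.legacy.dnf resolves to the builtin dnf module):

    - name: Install the iSCSI initiator utilities
      ansible.builtin.dnf:
        name: iscsi-initiator-utils
        state: present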
Nov 26 18:06:18 np0005537197 python3.9[162096]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:06:19 np0005537197 python3.9[162248]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:06:20 np0005537197 python3.9[162401]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:06:21 np0005537197 python3.9[162553]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:06:22 np0005537197 python3.9[162706]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:06:23 np0005537197 python3.9[162829]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764198381.883442-95-96510194941689/.source.iscsi _original_basename=.1z_m0psh follow=False checksum=36b8631a30aeed6fcd236a7432aa42b4ef7e472f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:06:24 np0005537197 python3.9[162981]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:06:25 np0005537197 python3.9[163133]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
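The sequence above resets the iSCSI initiator identity and pins the CHAP digest order: a fresh name is generated with iscsi-iname, written to initiatorname.iscsi, a marker file records the reset, and iscsid.conf gets a chap_algs line. A sketch with parameters copied from the logged invocations (task and variable names are illustrative; the InitiatorName= file format is an assumption about the copied template):

    - name: Generate a fresh initiator name
      ansible.builtin.command: /usr/sbin/iscsi-iname
      register: iscsi_iname                               # illustrative variable name

    - name: Persist the initiator name
      ansible.builtin.copy:
        dest: /etc/iscsi/initiatorname.iscsi
        mode: "0644"
        content: "InitiatorName={{ iscsi_iname.stdout }}\n"   # assumed file format

    - name: Mark the initiator as reset
      ansible.builtin.file:
        path: /etc/iscsi/.initiator_reset
        state: touch
        mode: "0600"

    - name: Prefer strong CHAP digests
      ansible.builtin.lineinfile:
        path: /etc/iscsi/iscsid.conf
        regexp: '^node.session.auth.chap_algs'
        insertafter: '^#node.session.auth.chap.algs'
        line: node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5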
Nov 26 18:06:25 np0005537197 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 18:06:26 np0005537197 python3.9[163286]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:06:26 np0005537197 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 26 18:06:27 np0005537197 python3.9[163442]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
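iscsid is brought up socket-first, so systemd owns the listening socket before the daemon starts (and, as the journal shows below, the one-time iscsi.service configuration is skipped once initiatorname.iscsi exists). The two logged systemd_service calls correspond to:

    - name: Enable and start the iscsid socket
      ansible.builtin.systemd_service:
        name: iscsid.socket
        enabled: true
        state: started

    - name: Enable and start iscsid
      ansible.builtin.systemd_service:
        name: iscsid
        enabled: true
        state: started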
Nov 26 18:06:27 np0005537197 systemd[1]: Reloading.
Nov 26 18:06:27 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:06:27 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:06:27 np0005537197 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 26 18:06:28 np0005537197 systemd[1]: Starting Open-iSCSI...
Nov 26 18:06:28 np0005537197 kernel: Loading iSCSI transport class v2.0-870.
Nov 26 18:06:28 np0005537197 systemd[1]: Started Open-iSCSI.
Nov 26 18:06:28 np0005537197 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Nov 26 18:06:28 np0005537197 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Nov 26 18:06:28 np0005537197 python3.9[163643]: ansible-ansible.builtin.service_facts Invoked
Nov 26 18:06:28 np0005537197 network[163660]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 18:06:28 np0005537197 network[163661]: 'network-scripts' will be removed from distribution in near future.
Nov 26 18:06:28 np0005537197 network[163662]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 18:06:34 np0005537197 python3.9[163933]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 26 18:06:35 np0005537197 python3.9[164085]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 26 18:06:35 np0005537197 python3.9[164241]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:06:36 np0005537197 python3.9[164364]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764198395.3319602-172-5260064391239/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:06:38 np0005537197 python3.9[164516]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:06:39 np0005537197 python3.9[164668]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 18:06:39 np0005537197 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 26 18:06:39 np0005537197 systemd[1]: Stopped Load Kernel Modules.
Nov 26 18:06:39 np0005537197 systemd[1]: Stopping Load Kernel Modules...
Nov 26 18:06:39 np0005537197 systemd[1]: Starting Load Kernel Modules...
Nov 26 18:06:39 np0005537197 systemd[1]: Finished Load Kernel Modules.
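dm-multipath is loaded immediately and persisted twice (modules-load.d and /etc/modules), then systemd-modules-load is restarted so the new drop-in is parsed, which is the Stopped/Starting/Finished cycle just above. Reconstructed sketch (the .conf content is an assumption about the rendered module-load.conf.j2 template):

    - name: Load dm-multipath now
      community.general.modprobe:
        name: dm-multipath
        state: present
        persistent: disabled          # persistence is handled by the files below

    - name: Persist the module via modules-load.d
      ansible.builtin.copy:
        dest: /etc/modules-load.d/dm-multipath.conf
        mode: "0644"
        content: "dm-multipath\n"     # assumed template output

    - name: Persist the module via /etc/modules
      ansible.builtin.lineinfile:
        path: /etc/modules
        create: true
        mode: "0644"
        line: dm-multipath

    - name: Re-read module configuration
      ansible.builtin.systemd:
        name: systemd-modules-load.service
        state: restarted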
Nov 26 18:06:40 np0005537197 podman[164796]: 2025-11-26 23:06:40.768805099 +0000 UTC m=+0.143493629 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 26 18:06:40 np0005537197 python3.9[164843]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:06:41 np0005537197 python3.9[165003]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:06:42 np0005537197 python3.9[165155]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:06:42 np0005537197 podman[165156]: 2025-11-26 23:06:42.786969803 +0000 UTC m=+0.075147825 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Nov 26 18:06:43 np0005537197 python3.9[165327]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:06:44 np0005537197 python3.9[165450]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764198402.897916-230-237969895263821/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:06:45 np0005537197 python3.9[165602]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:06:45 np0005537197 python3.9[165755]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:06:46 np0005537197 python3.9[165907]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:06:47 np0005537197 python3.9[166059]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:06:48 np0005537197 python3.9[166211]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:06:49 np0005537197 python3.9[166363]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:06:49 np0005537197 python3.9[166515]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:06:50 np0005537197 python3.9[166667]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
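The grep/lineinfile/replace sequence above converges /etc/multipath.conf on an empty blacklist (the catch-all devnode ".*" entry is stripped so real devices are not ignored) plus four defaults. The end state it implies could equally be written as one declarative task (a sketch; ordering inside defaults is not significant, and the rest of the file is whatever the copied multipath.conf template already carried):

    - name: Converged multipath.conf fragment (illustrative equivalent)
      ansible.builtin.copy:
        dest: /etc/multipath.conf
        mode: "0644"
        content: |
          defaults {
                  user_friendly_names no
                  skip_kpartx yes
                  recheck_wwid yes
                  find_multipaths yes
          }
          blacklist {
          }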
Nov 26 18:06:51 np0005537197 python3.9[166819]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:06:52 np0005537197 python3.9[166973]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:06:53 np0005537197 python3.9[167125]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:06:54 np0005537197 python3.9[167277]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:06:54 np0005537197 python3.9[167355]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:06:55 np0005537197 python3.9[167507]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:06:56 np0005537197 python3.9[167585]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:06:56 np0005537197 python3.9[167737]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:06:57 np0005537197 python3.9[167889]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:06:58 np0005537197 python3.9[167967]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:06:58 np0005537197 python3.9[168119]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:06:59 np0005537197 python3.9[168197]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:07:00 np0005537197 python3.9[168349]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
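The unit file and its preset are staged first, then a single systemd call reloads the daemon, enables the service, and starts it. The logged parameters map onto:

    - name: Enable edpm-container-shutdown
      ansible.builtin.systemd:
        name: edpm-container-shutdown
        daemon_reload: true
        enabled: true
        state: started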
Nov 26 18:07:00 np0005537197 systemd[1]: Reloading.
Nov 26 18:07:00 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:07:00 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:07:01 np0005537197 python3.9[168538]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:07:01 np0005537197 python3.9[168616]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:07:02 np0005537197 python3.9[168768]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:07:03 np0005537197 python3.9[168846]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:07:03 np0005537197 python3.9[168998]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:07:04 np0005537197 systemd[1]: Reloading.
Nov 26 18:07:04 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:07:04 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:07:04 np0005537197 systemd[1]: Starting Create netns directory...
Nov 26 18:07:04 np0005537197 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 26 18:07:04 np0005537197 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 26 18:07:04 np0005537197 systemd[1]: Finished Create netns directory.
Nov 26 18:07:05 np0005537197 python3.9[169191]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:07:06 np0005537197 python3.9[169343]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:07:07 np0005537197 python3.9[169466]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764198425.7305737-437-270254226962593/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
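Healthcheck scripts are staged under /var/lib/openstack/healthchecks and later bind-mounted read-only into each container at /openstack, where podman's --healthcheck-command runs them. Staging sketch from the logged parameters (the copy source name is illustrative):

    - name: Healthchecks directory
      ansible.builtin.file:
        path: /var/lib/openstack/healthchecks
        state: directory
        mode: "0755"
        owner: zuul
        group: zuul
        setype: container_file_t

    - name: Install the multipathd healthcheck script
      ansible.builtin.copy:
        src: healthcheck                   # illustrative source
        dest: /var/lib/openstack/healthchecks/multipathd/
        mode: "0700"
        owner: zuul
        group: zuul
        setype: container_file_t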
Nov 26 18:07:08 np0005537197 python3.9[169618]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:07:09 np0005537197 python3.9[169770]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:07:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:07:09.607 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 18:07:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:07:09.608 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 18:07:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:07:09.608 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423

Nov 26 18:07:09 np0005537197 python3.9[169893]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764198428.3981628-462-239694862328733/.source.json _original_basename=.el5mcbwi follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
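multipathd.json is a kolla config file; its exact contents are not logged, but the container trace further down (cat /run_command yielding '/usr/sbin/multipathd -d') pins the command key. A minimal hedged sketch of its likely shape (any config_files entries are omitted as unknown):

    {
        "command": "/usr/sbin/multipathd -d"
    }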
Nov 26 18:07:10 np0005537197 python3.9[170045]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:07:10 np0005537197 podman[170169]: 2025-11-26 23:07:10.981614229 +0000 UTC m=+0.098330631 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 26 18:07:12 np0005537197 python3.9[170499]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 26 18:07:13 np0005537197 podman[170623]: 2025-11-26 23:07:13.764641342 +0000 UTC m=+0.114780175 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 26 18:07:13 np0005537197 python3.9[170669]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 18:07:15 np0005537197 python3.9[170821]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 26 18:07:16 np0005537197 python3[170999]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 18:07:16 np0005537197 podman[171037]: 2025-11-26 23:07:16.953487125 +0000 UTC m=+0.066268813 container create 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 18:07:16 np0005537197 podman[171037]: 2025-11-26 23:07:16.924376985 +0000 UTC m=+0.037158663 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 26 18:07:16 np0005537197 python3[170999]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
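edpm_container_manage shells out to podman directly, but the create command above maps cleanly onto containers.podman.podman_container. A trimmed illustrative equivalent (volume list abbreviated to the storage-relevant mounts; the full set is in the command above):

    - name: multipathd container (illustrative equivalent)
      containers.podman.podman_container:
        name: multipathd
        image: quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
        network: host
        privileged: true
        conmon_pidfile: /run/multipathd.pid
        env:
          KOLLA_CONFIG_STRATEGY: COPY_ALWAYS
        healthcheck: /openstack/healthcheck
        log_driver: journald
        volumes:
          - /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro
          - /etc/iscsi:/etc/iscsi:ro
          - /var/lib/iscsi:/var/lib/iscsi
          - /etc/multipath:/etc/multipath:z
          - /etc/multipath.conf:/etc/multipath.conf:ro
          - /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z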
Nov 26 18:07:17 np0005537197 python3.9[171227]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:07:18 np0005537197 python3.9[171381]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:07:18 np0005537197 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 26 18:07:18 np0005537197 python3.9[171458]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:07:19 np0005537197 python3.9[171609]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764198439.0660813-550-178049495539386/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:07:19 np0005537197 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 26 18:07:20 np0005537197 python3.9[171686]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 18:07:20 np0005537197 systemd[1]: Reloading.
Nov 26 18:07:20 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:07:20 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:07:21 np0005537197 python3.9[171797]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:07:21 np0005537197 systemd[1]: Reloading.
Nov 26 18:07:21 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:07:21 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:07:21 np0005537197 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 26 18:07:21 np0005537197 systemd[1]: Starting multipathd container...
Nov 26 18:07:21 np0005537197 systemd[1]: Started libcrun container.
Nov 26 18:07:21 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897cb0b4e5f2aa2533e9f87d693dd54dd7d06ddf61d1580b8694627cb234cffa/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 26 18:07:21 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897cb0b4e5f2aa2533e9f87d693dd54dd7d06ddf61d1580b8694627cb234cffa/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 26 18:07:21 np0005537197 systemd[1]: Started /usr/bin/podman healthcheck run 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531.
Nov 26 18:07:21 np0005537197 podman[171838]: 2025-11-26 23:07:21.782853187 +0000 UTC m=+0.155502542 container init 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 18:07:21 np0005537197 multipathd[171854]: + sudo -E kolla_set_configs
Nov 26 18:07:21 np0005537197 podman[171838]: 2025-11-26 23:07:21.818706465 +0000 UTC m=+0.191355770 container start 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 26 18:07:21 np0005537197 podman[171838]: multipathd
Nov 26 18:07:21 np0005537197 systemd[1]: Started multipathd container.
Nov 26 18:07:21 np0005537197 podman[171861]: 2025-11-26 23:07:21.909003332 +0000 UTC m=+0.071864551 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 26 18:07:21 np0005537197 multipathd[171854]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 18:07:21 np0005537197 multipathd[171854]: INFO:__main__:Validating config file
Nov 26 18:07:21 np0005537197 multipathd[171854]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 18:07:21 np0005537197 multipathd[171854]: INFO:__main__:Writing out command to execute
Nov 26 18:07:21 np0005537197 systemd[1]: 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531-520b0c96c054129d.service: Main process exited, code=exited, status=1/FAILURE
Nov 26 18:07:21 np0005537197 systemd[1]: 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531-520b0c96c054129d.service: Failed with result 'exit-code'.
Nov 26 18:07:21 np0005537197 multipathd[171854]: ++ cat /run_command
Nov 26 18:07:21 np0005537197 multipathd[171854]: + CMD='/usr/sbin/multipathd -d'
Nov 26 18:07:21 np0005537197 multipathd[171854]: + ARGS=
Nov 26 18:07:21 np0005537197 multipathd[171854]: + sudo kolla_copy_cacerts
Nov 26 18:07:21 np0005537197 multipathd[171854]: + [[ ! -n '' ]]
Nov 26 18:07:21 np0005537197 multipathd[171854]: + . kolla_extend_start
Nov 26 18:07:21 np0005537197 multipathd[171854]: Running command: '/usr/sbin/multipathd -d'
Nov 26 18:07:21 np0005537197 multipathd[171854]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 26 18:07:21 np0005537197 multipathd[171854]: + umask 0022
Nov 26 18:07:21 np0005537197 multipathd[171854]: + exec /usr/sbin/multipathd -d
Nov 26 18:07:21 np0005537197 multipathd[171854]: 3124.690649 | --------start up--------
Nov 26 18:07:21 np0005537197 multipathd[171854]: 3124.690668 | read /etc/multipath.conf
Nov 26 18:07:21 np0005537197 multipathd[171854]: 3124.698477 | path checkers start up
Nov 26 18:07:22 np0005537197 python3.9[172043]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:07:22 np0005537197 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 26 18:07:23 np0005537197 python3.9[172198]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:07:24 np0005537197 python3.9[172363]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
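The restart is gated on the .multipath_restart_required marker touched earlier, and the podman ps --filter volume=/etc/multipath.conf query above identifies which containers consume the file. Sketch of the gate (variable name illustrative):

    - name: Check whether multipathd must be restarted
      ansible.builtin.stat:
        path: /etc/multipath/.multipath_restart_required
      register: multipath_restart_marker   # illustrative name

    - name: Restart the multipathd container when flagged
      ansible.builtin.systemd:
        name: edpm_multipathd
        state: restarted
      when: multipath_restart_marker.stat.exists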
Nov 26 18:07:24 np0005537197 systemd[1]: Stopping multipathd container...
Nov 26 18:07:24 np0005537197 multipathd[171854]: 3127.417656 | exit (signal)
Nov 26 18:07:24 np0005537197 multipathd[171854]: 3127.418546 | --------shut down-------
Nov 26 18:07:24 np0005537197 systemd[1]: libpod-2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531.scope: Deactivated successfully.
Nov 26 18:07:24 np0005537197 conmon[171854]: conmon 2b636e6822498465779f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531.scope/container/memory.events
Nov 26 18:07:24 np0005537197 podman[172367]: 2025-11-26 23:07:24.743642371 +0000 UTC m=+0.113789949 container died 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Nov 26 18:07:24 np0005537197 systemd[1]: 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531-520b0c96c054129d.timer: Deactivated successfully.
Nov 26 18:07:24 np0005537197 systemd[1]: Stopped /usr/bin/podman healthcheck run 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531.
Nov 26 18:07:24 np0005537197 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531-userdata-shm.mount: Deactivated successfully.
Nov 26 18:07:24 np0005537197 systemd[1]: var-lib-containers-storage-overlay-897cb0b4e5f2aa2533e9f87d693dd54dd7d06ddf61d1580b8694627cb234cffa-merged.mount: Deactivated successfully.
Nov 26 18:07:24 np0005537197 podman[172367]: 2025-11-26 23:07:24.81965198 +0000 UTC m=+0.189799558 container cleanup 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 18:07:24 np0005537197 podman[172367]: multipathd
Nov 26 18:07:24 np0005537197 podman[172396]: multipathd
Nov 26 18:07:24 np0005537197 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 26 18:07:24 np0005537197 systemd[1]: Stopped multipathd container.
Nov 26 18:07:24 np0005537197 systemd[1]: Starting multipathd container...
Nov 26 18:07:25 np0005537197 systemd[1]: Started libcrun container.
Nov 26 18:07:25 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897cb0b4e5f2aa2533e9f87d693dd54dd7d06ddf61d1580b8694627cb234cffa/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 26 18:07:25 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/897cb0b4e5f2aa2533e9f87d693dd54dd7d06ddf61d1580b8694627cb234cffa/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 26 18:07:25 np0005537197 systemd[1]: Started /usr/bin/podman healthcheck run 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531.
Nov 26 18:07:25 np0005537197 podman[172409]: 2025-11-26 23:07:25.186023996 +0000 UTC m=+0.213825334 container init 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 26 18:07:25 np0005537197 multipathd[172424]: + sudo -E kolla_set_configs
Nov 26 18:07:25 np0005537197 podman[172409]: 2025-11-26 23:07:25.227757019 +0000 UTC m=+0.255558357 container start 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 26 18:07:25 np0005537197 podman[172409]: multipathd
Nov 26 18:07:25 np0005537197 systemd[1]: Started multipathd container.
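The died/cleanup/start sequence above is systemd cycling the podman-managed container: edpm_multipathd.service stops the old container, podman unmounts its overlay layers, and a new libcrun container comes up under the same id. A minimal check that both the unit and the container it manages are running again, assuming the unit and container names from these logs:

    import subprocess

    # Exit code 0 from "is-active" means the systemd unit is up.
    subprocess.run(["systemctl", "is-active", "edpm_multipathd.service"], check=True)
    # Ask podman for the container's runtime state (prints e.g. "running").
    subprocess.run(["podman", "inspect", "--format", "{{.State.Status}}", "multipathd"],
                   check=True)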
Nov 26 18:07:25 np0005537197 multipathd[172424]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 18:07:25 np0005537197 multipathd[172424]: INFO:__main__:Validating config file
Nov 26 18:07:25 np0005537197 multipathd[172424]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 18:07:25 np0005537197 multipathd[172424]: INFO:__main__:Writing out command to execute
Nov 26 18:07:25 np0005537197 multipathd[172424]: ++ cat /run_command
Nov 26 18:07:25 np0005537197 multipathd[172424]: + CMD='/usr/sbin/multipathd -d'
Nov 26 18:07:25 np0005537197 multipathd[172424]: + ARGS=
Nov 26 18:07:25 np0005537197 multipathd[172424]: + sudo kolla_copy_cacerts
Nov 26 18:07:25 np0005537197 podman[172431]: 2025-11-26 23:07:25.330042013 +0000 UTC m=+0.085255455 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 26 18:07:25 np0005537197 systemd[1]: 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531-462c69b32d1b9618.service: Main process exited, code=exited, status=1/FAILURE
Nov 26 18:07:25 np0005537197 systemd[1]: 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531-462c69b32d1b9618.service: Failed with result 'exit-code'.
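The failed 2b636e…-462c69b32d1b9618.service is the transient unit podman spawns per healthcheck run (its sibling timer appears at 18:07:24 above); it exited 1 because the check fired while the container was still initializing, which matches health_status=starting / health_failing_streak=1 in the podman line. The 18:07:55 check below reports healthy. To re-run the same check by hand, a sketch using the container name from these logs:

    import subprocess

    # "podman healthcheck run" executes the container's configured test
    # (/openstack/healthcheck here) and exits 0 when it passes.
    result = subprocess.run(["podman", "healthcheck", "run", "multipathd"],
                            capture_output=True, text=True)
    print("healthy" if result.returncode == 0 else f"unhealthy: {result.stderr.strip()}")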
Nov 26 18:07:25 np0005537197 multipathd[172424]: + [[ ! -n '' ]]
Nov 26 18:07:25 np0005537197 multipathd[172424]: + . kolla_extend_start
Nov 26 18:07:25 np0005537197 multipathd[172424]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 26 18:07:25 np0005537197 multipathd[172424]: Running command: '/usr/sbin/multipathd -d'
Nov 26 18:07:25 np0005537197 multipathd[172424]: + umask 0022
Nov 26 18:07:25 np0005537197 multipathd[172424]: + exec /usr/sbin/multipathd -d
Nov 26 18:07:25 np0005537197 multipathd[172424]: 3128.090167 | --------start up--------
Nov 26 18:07:25 np0005537197 multipathd[172424]: 3128.090188 | read /etc/multipath.conf
Nov 26 18:07:25 np0005537197 multipathd[172424]: 3128.098254 | path checkers start up
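Lines 18:07:25 above trace the kolla entrypoint: kolla_set_configs applies /var/lib/kolla/config_files/config.json under the COPY_ALWAYS strategy, writes the command to /run_command, and the shell exec's it. A condensed sketch of that flow, assuming plain source/dest entries (the real kolla_set_configs also handles glob sources, ownership, and permissions):

    import json
    import os
    import shutil

    # COPY_ALWAYS: reapply every listed config file on each container start.
    with open("/var/lib/kolla/config_files/config.json") as f:
        cfg = json.load(f)
    for item in cfg.get("config_files", []):
        shutil.copy(item["source"], item["dest"])

    # kolla writes the command into /run_command; the entrypoint then exec's it.
    with open("/run_command") as f:
        argv = f.read().strip().split()   # e.g. ['/usr/sbin/multipathd', '-d']
    os.execvp(argv[0], argv)              # replaces the shell, like `exec` above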
Nov 26 18:07:26 np0005537197 python3.9[172613]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:07:27 np0005537197 python3.9[172765]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 26 18:07:28 np0005537197 python3.9[172917]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 26 18:07:28 np0005537197 kernel: Key type psk registered
Nov 26 18:07:28 np0005537197 python3.9[173079]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:07:29 np0005537197 python3.9[173202]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764198448.3755012-630-184543740933718/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:07:30 np0005537197 python3.9[173354]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:07:31 np0005537197 python3.9[173506]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 18:07:31 np0005537197 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 26 18:07:31 np0005537197 systemd[1]: Stopped Load Kernel Modules.
Nov 26 18:07:31 np0005537197 systemd[1]: Stopping Load Kernel Modules...
Nov 26 18:07:31 np0005537197 systemd[1]: Starting Load Kernel Modules...
Nov 26 18:07:31 np0005537197 systemd[1]: Finished Load Kernel Modules.
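The nvme-fabrics module is persisted in two places, a drop-in under /etc/modules-load.d (read by systemd-modules-load) plus a line appended to /etc/modules, loaded via modprobe (the kernel's "Key type psk registered" line appears as the module chain loads), and the loader service is restarted. The equivalent steps, sketched with the same paths:

    import pathlib
    import subprocess

    # Persist the module for systemd-modules-load(8), load it now, restart the loader.
    pathlib.Path("/etc/modules-load.d/nvme-fabrics.conf").write_text("nvme-fabrics\n")
    subprocess.run(["modprobe", "nvme-fabrics"], check=True)
    subprocess.run(["systemctl", "restart", "systemd-modules-load.service"], check=True)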
Nov 26 18:07:32 np0005537197 python3.9[173662]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 18:07:34 np0005537197 systemd[1]: Reloading.
Nov 26 18:07:35 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:07:35 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:07:35 np0005537197 systemd[1]: Reloading.
Nov 26 18:07:35 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:07:35 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:07:35 np0005537197 systemd-logind[819]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 26 18:07:35 np0005537197 systemd-logind[819]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 26 18:07:35 np0005537197 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 26 18:07:35 np0005537197 systemd[1]: Starting man-db-cache-update.service...
Nov 26 18:07:35 np0005537197 systemd[1]: Reloading.
Nov 26 18:07:36 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:07:36 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:07:36 np0005537197 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 26 18:07:37 np0005537197 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 26 18:07:37 np0005537197 systemd[1]: Finished man-db-cache-update.service.
Nov 26 18:07:37 np0005537197 systemd[1]: man-db-cache-update.service: Consumed 1.961s CPU time.
Nov 26 18:07:37 np0005537197 systemd[1]: run-r31afd44b507946639dd7484bc4ac1cae.service: Deactivated successfully.
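The dnf task installs nvme-cli; the Reloading lines and man-db-cache-update.service are side effects of the package transaction (scriptlets and file triggers), not playbook steps. A quick idempotency guard mirroring state=present, assuming the host's rpm/dnf CLIs:

    import subprocess

    # Install only when the package is absent, as "dnf state=present" does.
    if subprocess.run(["rpm", "-q", "nvme-cli"],
                      stdout=subprocess.DEVNULL).returncode != 0:
        subprocess.run(["dnf", "-y", "install", "nvme-cli"], check=True)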
Nov 26 18:07:38 np0005537197 python3.9[175093]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 18:07:38 np0005537197 systemd[1]: Stopping Open-iSCSI...
Nov 26 18:07:38 np0005537197 iscsid[163482]: iscsid shutting down.
Nov 26 18:07:38 np0005537197 systemd[1]: iscsid.service: Deactivated successfully.
Nov 26 18:07:38 np0005537197 systemd[1]: Stopped Open-iSCSI.
Nov 26 18:07:38 np0005537197 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 26 18:07:38 np0005537197 systemd[1]: Starting Open-iSCSI...
Nov 26 18:07:38 np0005537197 systemd[1]: Started Open-iSCSI.
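The "One time configuration for iscsi.service was skipped" line is the negated condition ConditionPathExists=!/etc/iscsi/initiatorname.iscsi: the initiator name already exists, so first-boot setup is unnecessary. A sketch of what that one-time step amounts to, assuming the standard iscsi-iname tool from iscsi-initiator-utils:

    import pathlib
    import subprocess

    path = pathlib.Path("/etc/iscsi/initiatorname.iscsi")
    if not path.exists():
        # iscsi-iname prints a fresh iqn.* initiator name.
        name = subprocess.run(["iscsi-iname"], capture_output=True,
                              text=True, check=True).stdout.strip()
        path.write_text(f"InitiatorName={name}\n")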
Nov 26 18:07:39 np0005537197 python3.9[175268]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 18:07:40 np0005537197 python3.9[175424]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:07:41 np0005537197 podman[175548]: 2025-11-26 23:07:41.471486499 +0000 UTC m=+0.199420533 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Nov 26 18:07:41 np0005537197 python3.9[175593]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 18:07:41 np0005537197 systemd[1]: Reloading.
Nov 26 18:07:41 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:07:41 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:07:42 np0005537197 python3.9[175788]: ansible-ansible.builtin.service_facts Invoked
Nov 26 18:07:43 np0005537197 network[175805]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 18:07:43 np0005537197 network[175806]: 'network-scripts' will be removed from distribution in near future.
Nov 26 18:07:43 np0005537197 network[175807]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 18:07:44 np0005537197 podman[175812]: 2025-11-26 23:07:43.999056179 +0000 UTC m=+0.103520068 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 26 18:07:49 np0005537197 python3.9[176101]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:07:50 np0005537197 python3.9[176254]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:07:51 np0005537197 python3.9[176407]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:07:52 np0005537197 python3.9[176560]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:07:53 np0005537197 python3.9[176713]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:07:54 np0005537197 python3.9[176866]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:07:55 np0005537197 python3.9[177019]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:07:55 np0005537197 podman[177021]: 2025-11-26 23:07:55.567675215 +0000 UTC m=+0.090345540 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd)
Nov 26 18:07:56 np0005537197 python3.9[177193]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:07:57 np0005537197 python3.9[177346]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:07:58 np0005537197 python3.9[177498]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:07:59 np0005537197 python3.9[177650]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:07:59 np0005537197 python3.9[177802]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:08:00 np0005537197 python3.9[177954]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:08:01 np0005537197 python3.9[178106]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:08:02 np0005537197 python3.9[178258]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:08:02 np0005537197 python3.9[178410]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:08:03 np0005537197 python3.9[178562]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:08:04 np0005537197 python3.9[178714]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:08:05 np0005537197 python3.9[178868]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:08:06 np0005537197 python3.9[179020]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:08:06 np0005537197 python3.9[179172]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:08:07 np0005537197 python3.9[179324]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:08:08 np0005537197 python3.9[179476]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:08:09 np0005537197 python3.9[179628]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:08:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:08:09.609 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 18:08:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:08:09.609 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 18:08:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:08:09.610 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
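These three DEBUG lines are oslo.concurrency's lockutils serializing the agent's ProcessMonitor around the "_check_child_processes" lock. A minimal use of the same API that would produce the acquiring/acquired/released trio, assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    # Callers of this function queue on an in-process lock with this name,
    # logging Acquiring/acquired/released at DEBUG exactly as seen above.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # inspect child processes here

    check_child_processes()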
Nov 26 18:08:10 np0005537197 python3.9[179780]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
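The #012 sequences are the syslog transport's octal escapes for newlines (0o12 = LF), so the _raw_params above is a small shell guard: disable certmonger only if it is active, then mask it unless a unit file already sits in /etc/systemd/system. A tiny decoder for such escapes, fed the logged command verbatim:

    import re

    raw = ("if systemctl is-active certmonger.service; then#012"
           "  systemctl disable --now certmonger.service#012"
           "  test -f /etc/systemd/system/certmonger.service || "
           "systemctl mask certmonger.service#012fi#012")

    # Replace each #NNN (octal) escape with the character it encodes.
    decoded = re.sub(r"#(\d{3})", lambda m: chr(int(m.group(1), 8)), raw)
    print(decoded)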
Nov 26 18:08:11 np0005537197 python3.9[179932]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 18:08:11 np0005537197 podman[180009]: 2025-11-26 23:08:11.849709605 +0000 UTC m=+0.131847891 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 18:08:12 np0005537197 python3.9[180110]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 18:08:12 np0005537197 systemd[1]: Reloading.
Nov 26 18:08:12 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:08:12 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:08:13 np0005537197 python3.9[180297]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:08:14 np0005537197 python3.9[180450]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:08:14 np0005537197 podman[180536]: 2025-11-26 23:08:14.807218862 +0000 UTC m=+0.093752782 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 26 18:08:15 np0005537197 python3.9[180623]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:08:15 np0005537197 python3.9[180776]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:08:16 np0005537197 python3.9[180929]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:08:17 np0005537197 python3.9[181082]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:08:18 np0005537197 python3.9[181235]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:08:19 np0005537197 python3.9[181388]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
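Taken together, 18:07:49 through 18:08:19 is the tripleo_nova_* teardown: stop and disable each unit, delete its unit file from both /usr/lib/systemd/system and /etc/systemd/system, daemon-reload (18:08:12), then reset-failed to drop any stale failure state. A condensed sketch of the same sequence, assuming the unit list from these logs:

    import pathlib
    import subprocess

    UNITS = [
        "tripleo_nova_compute", "tripleo_nova_migration_target",
        "tripleo_nova_api_cron", "tripleo_nova_api", "tripleo_nova_conductor",
        "tripleo_nova_metadata", "tripleo_nova_scheduler", "tripleo_nova_vnc_proxy",
    ]

    for unit in UNITS:
        svc = f"{unit}.service"
        subprocess.run(["systemctl", "disable", "--now", svc])  # stop + disable
        for root in ("/usr/lib/systemd/system", "/etc/systemd/system"):
            pathlib.Path(root, svc).unlink(missing_ok=True)     # drop unit files

    subprocess.run(["systemctl", "daemon-reload"], check=True)
    for unit in UNITS:
        # Clears failed state; harmless if the unit is already gone.
        subprocess.run(["systemctl", "reset-failed", f"{unit}.service"])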
Nov 26 18:08:20 np0005537197 python3.9[181541]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:08:21 np0005537197 python3.9[181693]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:08:22 np0005537197 python3.9[181845]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:08:23 np0005537197 python3.9[181997]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:08:23 np0005537197 python3.9[182149]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:08:24 np0005537197 python3.9[182301]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:08:25 np0005537197 python3.9[182453]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:08:25 np0005537197 podman[182498]: 2025-11-26 23:08:25.797135025 +0000 UTC m=+0.088786450 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 26 18:08:26 np0005537197 python3.9[182624]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:08:27 np0005537197 python3.9[182776]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:08:28 np0005537197 python3.9[182928]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
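Each of these directory tasks passes setype=container_file_t so the paths remain usable as podman bind mounts under SELinux. The equivalent by hand, sketched with a few of the paths from the tasks above:

    import os
    import subprocess

    for path in ("/var/lib/openstack/config/nova",
                 "/var/lib/openstack/config/containers",
                 "/var/lib/nova/instances"):
        os.makedirs(path, exist_ok=True)
        # Same effect as ansible's setype=container_file_t argument.
        subprocess.run(["chcon", "-t", "container_file_t", path], check=True)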
Nov 26 18:08:33 np0005537197 python3.9[183080]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 26 18:08:34 np0005537197 python3.9[183233]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 26 18:08:35 np0005537197 python3.9[183391]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
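The getent/group/user trio pins the nova account to uid/gid 42436 and adds it to the libvirt group, so host-side file ownership matches the ids used inside the containers. A shell-level equivalent via Python, assuming standard shadow-utils:

    import subprocess

    # Mirror the ansible group/user tasks: fixed ids, libvirt membership.
    # groupadd -f exits 0 if the group already exists.
    subprocess.run(["groupadd", "-f", "-g", "42436", "nova"], check=True)
    # No check=True: useradd fails harmlessly if the user is already present.
    subprocess.run(["useradd", "-u", "42436", "-g", "nova", "-G", "libvirt",
                    "-s", "/bin/sh", "-c", "nova user", "-m", "nova"])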
Nov 26 18:08:36 np0005537197 systemd-logind[819]: New session 25 of user zuul.
Nov 26 18:08:36 np0005537197 systemd[1]: Started Session 25 of User zuul.
Nov 26 18:08:36 np0005537197 systemd[1]: session-25.scope: Deactivated successfully.
Nov 26 18:08:36 np0005537197 systemd-logind[819]: Session 25 logged out. Waiting for processes to exit.
Nov 26 18:08:36 np0005537197 systemd-logind[819]: Removed session 25.
Nov 26 18:08:38 np0005537197 python3.9[183577]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:08:38 np0005537197 python3.9[183698]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764198517.627367-1229-262769742712641/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:08:39 np0005537197 python3.9[183848]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:08:40 np0005537197 python3.9[183924]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:08:40 np0005537197 python3.9[184074]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:08:41 np0005537197 python3.9[184195]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764198520.4072037-1229-83104386209196/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:08:42 np0005537197 podman[184319]: 2025-11-26 23:08:42.378676813 +0000 UTC m=+0.158456054 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 26 18:08:42 np0005537197 python3.9[184362]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:08:43 np0005537197 python3.9[184493]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764198521.8340464-1229-74694492481174/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:08:43 np0005537197 python3.9[184643]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:08:44 np0005537197 python3.9[184764]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764198523.4129984-1229-208441785705896/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:08:45 np0005537197 podman[184888]: 2025-11-26 23:08:45.369608614 +0000 UTC m=+0.085369920 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 26 18:08:45 np0005537197 python3.9[184931]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:08:46 np0005537197 python3.9[185054]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764198524.8762805-1229-34837410811645/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:08:47 np0005537197 python3.9[185206]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:08:47 np0005537197 python3.9[185358]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:08:48 np0005537197 python3.9[185510]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:08:49 np0005537197 python3.9[185662]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:08:50 np0005537197 python3.9[185785]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764198529.0878341-1336-96455447092038/.source _original_basename=.i0aqi1df follow=False checksum=8579b1eeea24549f9e8382a4102ae20c6876b737 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
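Note the attributes=+i on the compute_id copy: besides mode 0400 and nova ownership, the file is made immutable so the node's compute UUID cannot be rewritten casually (the nova_compute_init container below is told to skip it via NOVA_STATEDIR_OWNERSHIP_SKIP). A sketch for checking the flag, assuming e2fsprogs' lsattr is available on the host:

    import subprocess

    out = subprocess.run(["lsattr", "/var/lib/nova/compute_id"],
                         capture_output=True, text=True, check=True).stdout
    flags = out.split()[0]  # e.g. "----i---------e-------"; 'i' is the immutable bit
    print("immutable" if "i" in flags else "mutable")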
Nov 26 18:08:51 np0005537197 python3.9[185937]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:08:52 np0005537197 python3.9[186089]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:08:53 np0005537197 python3.9[186210]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764198531.8521836-1362-186021080522544/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:08:53 np0005537197 python3.9[186360]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:08:54 np0005537197 python3.9[186481]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764198533.3166046-1377-159053778836493/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:08:55 np0005537197 python3.9[186633]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
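ansible-container_config_data collects the per-container JSON definitions under config_path that match config_pattern. Conceptually it reduces to loading those files into a name-to-definition map; a simplified sketch (the real module also applies config_overrides and handles multiple matches):

    import glob, json, os

    defs = {}
    for path in glob.glob("/var/lib/openstack/config/containers/nova_compute_init.json"):
        name = os.path.basename(path).rsplit(".json", 1)[0]
        with open(path) as f:
            defs[name] = json.load(f)

    print(defs["nova_compute_init"]["image"])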
Nov 26 18:08:56 np0005537197 podman[186757]: 2025-11-26 23:08:56.341582812 +0000 UTC m=+0.088313108 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 26 18:08:56 np0005537197 python3.9[186800]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 18:08:57 np0005537197 python3[186957]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 18:08:57 np0005537197 podman[186995]: 2025-11-26 23:08:57.945841708 +0000 UTC m=+0.084274651 container create 6223bfc8a085b2f3ffbc5ee0176f014d22d6831007810a5215ac6c12a5f0576c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 18:08:57 np0005537197 podman[186995]: 2025-11-26 23:08:57.906847676 +0000 UTC m=+0.045280689 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 26 18:08:57 np0005537197 python3[186957]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
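The PODMAN-CONTAINER-DEBUG line shows how edpm_container_manage flattens the config_data dictionary into podman create flags: environment entries become --env, net becomes --network, volumes become --volume, and so on, with the whole dictionary also attached verbatim as a config_data label. A simplified sketch of that translation, covering only a subset of the keys visible above (the real module handles many more, including the labels):

    def podman_create_args(name: str, cfg: dict) -> list:
        args = ["podman", "create", "--name", name,
                "--conmon-pidfile", f"/run/{name}.pid"]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={val}"]
        if "net" in cfg:
            args += ["--network", cfg["net"]]
        args += [f"--privileged={cfg.get('privileged', False)}"]
        for opt in cfg.get("security_opt", []):
            args += ["--security-opt", opt]
        if "user" in cfg:
            args += ["--user", cfg["user"]]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        args.append(cfg["image"])
        if "command" in cfg:
            args += cfg["command"].split()  # simplification: real shell quoting differs
        return args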
Nov 26 18:08:58 np0005537197 python3.9[187186]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:09:00 np0005537197 python3.9[187340]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 26 18:09:01 np0005537197 python3.9[187492]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 18:09:02 np0005537197 python3[187644]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 18:09:02 np0005537197 podman[187681]: 2025-11-26 23:09:02.285522594 +0000 UTC m=+0.051533533 container create 020019830bcf75bc086f375602c38352ca3a81fbe13eab2ae08d6da7f49d7d19 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=nova_compute)
Nov 26 18:09:02 np0005537197 podman[187681]: 2025-11-26 23:09:02.256027554 +0000 UTC m=+0.022038503 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 26 18:09:02 np0005537197 python3[187644]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
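Both create commands embed the full container definition as a config_data label, and the long-running agents carry an EDPM_CONFIG_HASH environment variable (see the ovn_metadata_agent event earlier), which suggests drift is detected by hashing the rendered configuration. The exact scheme is not shown in this log; a plausible sketch of deriving such a hash from a definition:

    import hashlib, json

    def config_hash(config_data: dict) -> str:
        # canonical serialization so key order cannot change the hash
        canon = json.dumps(config_data, sort_keys=True, separators=(",", ":"))
        return hashlib.sha256(canon.encode()).hexdigest()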
Nov 26 18:09:03 np0005537197 python3.9[187870]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:09:04 np0005537197 python3.9[188024]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:09:05 np0005537197 python3.9[188175]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764198544.471679-1469-230566803337432/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:09:06 np0005537197 python3.9[188251]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 18:09:06 np0005537197 systemd[1]: Reloading.
Nov 26 18:09:06 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:09:06 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:09:07 np0005537197 python3.9[188362]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:09:07 np0005537197 systemd[1]: Reloading.
Nov 26 18:09:07 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:09:07 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:09:07 np0005537197 systemd[1]: Starting nova_compute container...
Nov 26 18:09:07 np0005537197 systemd[1]: Started libcrun container.
Nov 26 18:09:07 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1fe25b828556e1b57261f35fdf806cc1a19c1ccf38b099d6d0267d6f2e77bf/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 26 18:09:07 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1fe25b828556e1b57261f35fdf806cc1a19c1ccf38b099d6d0267d6f2e77bf/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 26 18:09:07 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1fe25b828556e1b57261f35fdf806cc1a19c1ccf38b099d6d0267d6f2e77bf/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 26 18:09:07 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1fe25b828556e1b57261f35fdf806cc1a19c1ccf38b099d6d0267d6f2e77bf/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 26 18:09:07 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1fe25b828556e1b57261f35fdf806cc1a19c1ccf38b099d6d0267d6f2e77bf/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 26 18:09:07 np0005537197 podman[188403]: 2025-11-26 23:09:07.729174624 +0000 UTC m=+0.150429203 container init 020019830bcf75bc086f375602c38352ca3a81fbe13eab2ae08d6da7f49d7d19 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=nova_compute, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 26 18:09:07 np0005537197 podman[188403]: 2025-11-26 23:09:07.742164135 +0000 UTC m=+0.163418714 container start 020019830bcf75bc086f375602c38352ca3a81fbe13eab2ae08d6da7f49d7d19 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 18:09:07 np0005537197 podman[188403]: nova_compute
Nov 26 18:09:07 np0005537197 nova_compute[188418]: + sudo -E kolla_set_configs
Nov 26 18:09:07 np0005537197 systemd[1]: Started nova_compute container.
Nov 26 18:09:07 np0005537197 nova_compute[188418]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 18:09:07 np0005537197 nova_compute[188418]: INFO:__main__:Validating config file
Nov 26 18:09:07 np0005537197 nova_compute[188418]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 18:09:07 np0005537197 nova_compute[188418]: INFO:__main__:Copying service configuration files
Nov 26 18:09:07 np0005537197 nova_compute[188418]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 26 18:09:07 np0005537197 nova_compute[188418]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 26 18:09:07 np0005537197 nova_compute[188418]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 26 18:09:07 np0005537197 nova_compute[188418]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 26 18:09:07 np0005537197 nova_compute[188418]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 26 18:09:07 np0005537197 nova_compute[188418]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 26 18:09:07 np0005537197 nova_compute[188418]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 26 18:09:07 np0005537197 nova_compute[188418]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 26 18:09:07 np0005537197 nova_compute[188418]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 26 18:09:07 np0005537197 nova_compute[188418]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 26 18:09:07 np0005537197 nova_compute[188418]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 26 18:09:07 np0005537197 nova_compute[188418]: INFO:__main__:Deleting /etc/ceph
Nov 26 18:09:07 np0005537197 nova_compute[188418]: INFO:__main__:Creating directory /etc/ceph
Nov 26 18:09:07 np0005537197 nova_compute[188418]: INFO:__main__:Setting permission for /etc/ceph
Nov 26 18:09:07 np0005537197 nova_compute[188418]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 26 18:09:07 np0005537197 nova_compute[188418]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 26 18:09:07 np0005537197 nova_compute[188418]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 26 18:09:07 np0005537197 nova_compute[188418]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 26 18:09:07 np0005537197 nova_compute[188418]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 26 18:09:07 np0005537197 nova_compute[188418]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 26 18:09:07 np0005537197 nova_compute[188418]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 26 18:09:07 np0005537197 nova_compute[188418]: INFO:__main__:Writing out command to execute
Nov 26 18:09:07 np0005537197 nova_compute[188418]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 26 18:09:07 np0005537197 nova_compute[188418]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 26 18:09:07 np0005537197 nova_compute[188418]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 26 18:09:07 np0005537197 nova_compute[188418]: ++ cat /run_command
Nov 26 18:09:07 np0005537197 nova_compute[188418]: + CMD=nova-compute
Nov 26 18:09:07 np0005537197 nova_compute[188418]: + ARGS=
Nov 26 18:09:07 np0005537197 nova_compute[188418]: + sudo kolla_copy_cacerts
Nov 26 18:09:07 np0005537197 nova_compute[188418]: + [[ ! -n '' ]]
Nov 26 18:09:07 np0005537197 nova_compute[188418]: + . kolla_extend_start
Nov 26 18:09:07 np0005537197 nova_compute[188418]: Running command: 'nova-compute'
Nov 26 18:09:07 np0005537197 nova_compute[188418]: + echo 'Running command: '\''nova-compute'\'''
Nov 26 18:09:07 np0005537197 nova_compute[188418]: + umask 0022
Nov 26 18:09:07 np0005537197 nova_compute[188418]: + exec nova-compute
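The INFO lines above are kolla_set_configs executing the COPY_ALWAYS strategy from /var/lib/kolla/config_files/config.json: delete the destination, copy the declared source over it, set permissions, after which kolla_start reads /run_command and execs nova-compute. A condensed sketch of that copy loop (the real tool also handles directories, globs, optional sources and ownership):

    import json, os, shutil

    with open("/var/lib/kolla/config_files/config.json") as f:
        config = json.load(f)

    for item in config.get("config_files", []):
        dest = item["dest"]
        if os.path.exists(dest):
            os.remove(dest)                   # "Deleting <dest>"
        shutil.copy(item["source"], dest)     # "Copying <source> to <dest>"
        os.chmod(dest, int(item["perm"], 8))  # "Setting permission for <dest>"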
Nov 26 18:09:08 np0005537197 python3.9[188580]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:09:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:09:09.609 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 18:09:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:09:09.610 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 18:09:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:09:09.610 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 18:09:09 np0005537197 python3.9[188730]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:09:09 np0005537197 nova_compute[188418]: 2025-11-26 23:09:09.734 188422 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 26 18:09:09 np0005537197 nova_compute[188418]: 2025-11-26 23:09:09.734 188422 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 26 18:09:09 np0005537197 nova_compute[188418]: 2025-11-26 23:09:09.735 188422 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 26 18:09:09 np0005537197 nova_compute[188418]: 2025-11-26 23:09:09.735 188422 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Nov 26 18:09:09 np0005537197 nova_compute[188418]: 2025-11-26 23:09:09.858 188422 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 18:09:09 np0005537197 nova_compute[188418]: 2025-11-26 23:09:09.886 188422 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 18:09:09 np0005537197 nova_compute[188418]: 2025-11-26 23:09:09.887 188422 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
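The grep that returns 1 here is a capability probe: the iSCSI connector greps the iscsiadm executable for the node.session.scan token to decide whether manual device scanning is supported. On this host /usr/sbin/iscsiadm was just replaced by the run-on-host wrapper (see the kolla copy steps above), which does not contain the token, so the feature is treated as unavailable. The probe reduces to:

    import subprocess

    rc = subprocess.run(
        ["grep", "-F", "node.session.scan", "/sbin/iscsiadm"],
        stdout=subprocess.DEVNULL,
    ).returncode
    # rc 0: token found, manual scan usable; rc 1: not found, fall back
    print("manual scan supported" if rc == 0 else "manual scan not supported")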
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.588 188422 INFO nova.virt.driver [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 26 18:09:10 np0005537197 python3.9[188884]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.715 188422 INFO nova.compute.provider_config [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.812 188422 DEBUG oslo_concurrency.lockutils [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.813 188422 DEBUG oslo_concurrency.lockutils [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.813 188422 DEBUG oslo_concurrency.lockutils [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.814 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.814 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.814 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.814 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.814 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.815 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.815 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.815 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.815 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.815 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.816 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.816 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.816 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.817 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.817 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.817 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.817 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.817 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.818 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.818 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.818 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.818 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.818 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.819 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.819 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.819 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.819 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.820 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.820 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.820 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.820 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.821 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.821 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.821 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.821 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.822 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.822 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.822 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.822 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.823 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.823 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.823 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.823 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.824 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.824 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.824 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.824 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.825 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.825 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.825 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.825 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.825 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.826 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.826 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.826 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.826 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.826 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.827 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.827 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.827 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.827 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.827 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.827 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.828 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.828 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.828 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.828 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.828 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.829 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.829 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.829 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.829 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.829 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.830 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.830 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.830 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.830 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.831 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.831 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.831 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.832 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.832 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.832 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.832 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.833 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.833 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.833 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.833 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.834 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.834 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.834 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.834 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.835 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.835 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.835 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.835 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.835 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.836 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.836 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.836 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.836 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.836 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.837 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.837 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.837 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.837 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.837 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.838 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.838 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.838 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.838 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.838 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.839 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.839 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.839 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.839 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.840 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.840 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.840 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.840 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.840 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.841 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.841 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.841 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.841 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.841 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.842 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.842 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.842 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.842 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.842 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.843 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.843 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.843 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.843 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.843 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.844 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.844 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.844 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.844 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.844 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.845 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.845 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.845 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.845 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.845 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.845 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.846 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
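The dump above is produced by oslo.config: at startup the service calls ConfigOpts.log_opt_values(), which logs every registered option at DEBUG level and masks any option declared with secret=True (for example transport_url and database.connection) as ****. The entries up to this point are [DEFAULT]-section options; the lines that follow are group-scoped options (oslo_concurrency.*, api.*, cache.*, and so on), which is why the recorded source location changes from cfg.py:2602 to cfg.py:2609. A minimal, self-contained sketch of how such a dump is generated, using illustrative option names rather than Nova's real option set:

    # Sketch only: reproduces the shape of the dump above with made-up options.
    import logging
    from oslo_config import cfg

    LOG = logging.getLogger(__name__)
    CONF = cfg.CONF

    # One [DEFAULT] option and one group-scoped option; values of options
    # registered with secret=True are rendered as '****' by log_opt_values().
    CONF.register_opts([cfg.StrOpt('my_ip', default='127.0.0.1')])
    CONF.register_opts([cfg.StrOpt('connection', secret=True)], group='database')

    logging.basicConfig(level=logging.DEBUG)
    CONF([])                                  # parse an (empty) command line
    CONF.log_opt_values(LOG, logging.DEBUG)   # emits the 'name = value' lines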
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.846 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.846 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.846 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.847 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.847 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.847 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.847 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.847 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.848 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.848 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.848 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.848 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.849 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.849 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.849 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.849 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.849 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.850 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.850 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.850 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.850 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.850 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.851 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.851 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.851 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.851 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.851 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.852 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.852 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.852 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.852 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.853 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.853 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.853 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.853 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.853 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.854 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.854 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.854 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.854 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.855 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.855 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.855 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.855 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.855 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.856 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.856 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.856 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.856 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.856 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.857 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.857 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.857 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.857 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.857 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.858 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.858 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.858 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.858 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.859 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.859 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.859 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.859 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.860 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.860 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.860 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.860 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.860 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.861 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.861 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.861 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.861 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.861 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.861 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.861 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.862 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.862 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.862 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.862 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.862 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.862 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.862 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.863 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.863 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.863 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.863 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.863 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.864 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.864 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.864 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.864 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.864 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.864 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.865 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.865 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.865 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.865 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.865 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.865 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.866 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.866 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.866 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.866 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.866 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.866 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.866 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.866 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.867 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.867 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.867 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.867 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.867 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.867 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.868 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.868 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.868 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.868 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.868 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.868 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.868 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.869 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.869 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.869 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.869 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.869 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.869 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.869 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.870 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.870 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.870 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.870 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.870 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.870 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
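[editor's note] The "****" values above are not literal: oslo.config masks any option registered with secret=True (database connection URLs carry credentials) whenever a service dumps its configuration, and the "log_opt_values ... cfg.py:2609" suffix on every line is the call site doing the dumping. A minimal sketch of how such a dump is produced — the group and option names below are invented for illustration, not Nova's registration code:

    import logging
    from oslo_config import cfg

    LOG = logging.getLogger(__name__)
    CONF = cfg.CONF

    # Hypothetical options mirroring the [database] section seen in the log.
    opts = [
        cfg.StrOpt('connection', secret=True),   # rendered as **** in dumps
        cfg.IntOpt('max_retries', default=10),
    ]
    CONF.register_opts(opts, group='database')

    logging.basicConfig(level=logging.DEBUG)
    # Emits one "<group>.<opt> = <value>" DEBUG line per registered option,
    # exactly the shape of the entries in this log.
    CONF.log_opt_values(LOG, logging.DEBUG)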
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.871 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.871 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.871 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.871 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.871 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.871 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.871 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.872 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.872 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.872 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.872 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.872 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.872 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.872 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.873 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.873 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.873 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.873 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.873 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.873 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
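[editor's note] The db_* knobs in both groups describe oslo.db's retry behaviour for transient database errors: up to db_max_retries attempts, starting at db_retry_interval seconds and, because db_inc_retry_interval is True, doubling each time up to db_max_retry_interval. A plain-Python sketch of the backoff schedule implied by the values logged here (a reimplementation for illustration, not the oslo.db source):

    # db_retry_interval=1, db_inc_retry_interval=True,
    # db_max_retry_interval=10, db_max_retries=20
    def backoff(retry_interval=1, inc=True, max_interval=10, max_retries=20):
        delay = retry_interval
        for attempt in range(1, max_retries + 1):
            yield attempt, delay
            if inc:
                delay = min(delay * 2, max_interval)

    for attempt, delay in backoff():
        print(f"attempt {attempt}: sleep {delay}s")
    # 1s, 2s, 4s, 8s, then capped at 10s for the remaining attempts.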
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.873 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.874 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.874 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.874 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.874 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.874 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.875 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.875 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.875 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.875 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.875 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.875 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.875 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.876 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.876 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.876 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.876 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.876 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.876 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.877 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.877 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.877 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.877 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.877 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.877 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.877 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.878 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.878 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.878 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.878 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.878 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.878 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.878 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.879 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
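[editor's note] The [glance] block shows how this node locates the image service: glance.api_servers is None, so no endpoint is hard-coded and the URL is resolved from the Keystone service catalog using service_type=image, region_name=regionOne and only the 'internal' interface; glance.num_retries = 3 caps Nova's own retry count against the image API. A hedged sketch of the equivalent catalog lookup with keystoneauth1 — the session construction is a placeholder, since the real auth plugin configuration is elided here:

    from keystoneauth1 import adapter, session

    # Placeholder: in the service this session carries the configured
    # auth plugin; an unauthenticated Session only serves this sketch.
    sess = session.Session()

    image_api = adapter.Adapter(
        session=sess,
        service_type='image',      # glance.service_type
        region_name='regionOne',   # glance.region_name
        interface='internal',      # glance.valid_interfaces
    )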
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.879 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.879 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.879 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.879 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.879 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.879 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.880 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.880 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.880 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.880 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.880 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.880 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.880 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.880 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.881 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.881 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.881 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.881 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.881 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.881 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.882 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.882 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.882 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.882 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.882 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.882 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.883 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
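[editor's note] The image_cache values define the cache-pruning cadence: the periodic manager runs every 2400 s (40 min) over the _base subdirectory and removes unused base images once they are older than 86400 s (24 h), or 3600 s (1 h) for resized variants. A toy version of that age test, for illustration only (not Nova's implementation):

    import os
    import time

    MAX_AGE_ORIGINAL = 86400  # remove_unused_original_minimum_age_seconds
    MAX_AGE_RESIZED = 3600    # remove_unused_resized_minimum_age_seconds

    def is_stale(path, resized=False):
        """True if an unused cached image is old enough to prune."""
        age = time.time() - os.path.getmtime(path)
        return age > (MAX_AGE_RESIZED if resized else MAX_AGE_ORIGINAL)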
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.883 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.883 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.883 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.883 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.883 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.883 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.884 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.884 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.884 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.884 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.884 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.884 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.884 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.885 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.885 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.885 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.885 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.885 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.885 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.885 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.886 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.886 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.886 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.886 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.886 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.886 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.886 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.887 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.887 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.887 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.887 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.887 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.887 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.887 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.888 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.888 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.888 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.888 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.888 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.888 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.888 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.889 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.889 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.889 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.889 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.889 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.889 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.889 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.890 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.890 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.890 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.890 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.890 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.890 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.890 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.891 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.891 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.891 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.891 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.891 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.891 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.891 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.892 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.892 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.892 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.892 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.892 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.892 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.892 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.893 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
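[editor's note] key_manager.backend = barbican selects Castellan's Barbican driver, so of the two backend groups dumped here only the [barbican] options are consulted; the [vault] group is present at its defaults because oslo.config registers and logs every known group regardless of the active backend. A hedged sketch of obtaining the key manager through Castellan, assuming its documented key_manager.API entry point:

    from castellan import key_manager
    from oslo_config import cfg

    CONF = cfg.CONF
    # Returns the backend named by [key_manager] backend — Barbican here.
    manager = key_manager.API(configuration=CONF)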
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.893 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.893 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.893 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.893 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.893 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.893 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.894 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.894 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.894 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.894 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.894 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.894 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.895 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.895 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.895 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.895 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.895 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.896 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.896 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.896 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.896 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.896 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.896 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.897 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.897 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.897 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.897 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.897 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.897 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.897 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.898 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.898 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.898 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.898 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.898 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.898 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.898 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.899 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.899 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.899 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.899 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.899 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.899 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.899 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.900 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.900 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.900 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.900 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.900 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.900 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.901 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.901 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.901 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.901 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.901 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.901 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.902 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.902 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.902 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.902 188422 WARNING oslo_config.cfg [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 26 18:09:10 np0005537197 nova_compute[188418]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 26 18:09:10 np0005537197 nova_compute[188418]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 26 18:09:10 np0005537197 nova_compute[188418]: and ``live_migration_inbound_addr`` respectively.
Nov 26 18:09:10 np0005537197 nova_compute[188418]: ).  Its value may be silently ignored in the future.#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.902 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
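[editor's note] The deprecation warning above spells out the migration path: rather than setting the full live_migration_uri, the scheme and the target address are configured separately. A minimal nova.conf sketch of the equivalent settings follows; only the option names and the "tls" scheme (implied by the logged value qemu+tls://%s/system) come from this log, while the [libvirt] section placement and the inbound address value are assumptions for illustration.

    [libvirt]
    # Replaces: live_migration_uri = qemu+tls://%s/system
    # "tls" is substituted into the qemu+%s:// migration URI scheme.
    live_migration_scheme = tls
    # Hypothetical placeholder address; set to this host's migration
    # network endpoint so peers connect to the right interface.
    live_migration_inbound_addr = 192.0.2.10

With both options set, live_migration_uri can be dropped from the config, which silences the warning shown above on the next service start.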
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.902 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.903 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.903 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.903 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.903 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.903 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.903 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.903 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.904 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.904 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.904 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.904 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.904 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.904 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.905 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.905 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.905 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.905 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.905 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.905 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.905 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.906 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.906 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.906 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.906 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.906 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.906 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.907 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.907 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.907 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.907 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.907 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.907 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.908 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.908 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.908 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.908 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.908 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.908 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.908 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.909 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.909 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.909 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.909 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.909 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.909 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.909 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.910 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.910 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.910 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.910 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.910 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.910 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.910 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.911 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.911 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.911 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.911 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.911 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.911 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.911 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.912 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.912 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.912 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.912 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.912 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.912 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.912 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.913 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.913 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.913 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.913 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.913 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.913 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.913 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.914 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.914 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.914 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.914 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.914 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.914 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.914 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.915 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.915 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.915 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.915 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.915 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.915 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.916 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.916 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.916 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.916 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.916 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.916 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.916 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.917 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.917 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.917 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.917 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.917 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.917 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.917 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.918 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.918 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.918 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.918 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.918 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.918 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.919 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.919 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.919 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.919 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.919 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.919 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.919 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.920 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.920 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.920 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.920 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.920 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.920 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.921 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.921 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.921 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.921 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.921 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.921 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.921 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.922 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.922 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.922 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.922 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.922 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.922 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.922 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.923 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.923 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.923 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.923 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.923 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.923 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.923 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.924 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.924 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.924 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.924 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.924 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.924 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.924 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.925 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.925 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.925 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.925 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.925 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.925 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.925 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.926 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.926 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.926 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.926 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.926 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.926 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.926 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.927 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.927 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.927 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.927 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.927 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.927 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.927 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.928 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.928 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.928 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.928 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.928 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.928 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.928 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.929 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.929 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.929 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.929 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.929 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.929 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.930 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.930 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.930 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.930 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.930 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.930 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.930 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.930 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.931 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.931 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.931 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
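
The [service_user] block above holds standard keystoneauth1 credential options (auth_type = password, plus TLS and timeout settings); send_service_user_token = True is nova's own flag for attaching a service token to user-initiated requests. A hedged sketch of how such a group is typically loaded with keystoneauth1; the config-file path is an assumption:

    from keystoneauth1 import loading
    from oslo_config import cfg

    CONF = cfg.CONF

    # register the standard auth/session options under [service_user]
    loading.register_auth_conf_options(CONF, 'service_user')
    loading.register_session_conf_options(CONF, 'service_user')

    CONF(['--config-file', '/etc/nova/nova.conf'], project='nova')

    auth = loading.load_auth_from_conf_options(CONF, 'service_user')
    sess = loading.load_session_from_conf_options(CONF, 'service_user', auth=auth)
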
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.931 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.931 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.931 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.932 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.932 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.932 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.932 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.932 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.932 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.932 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.933 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.933 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.933 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.933 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.933 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.933 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.933 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.933 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.934 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.934 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.934 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.934 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.934 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.934 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.934 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.935 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.935 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.935 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.935 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.935 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.935 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.935 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.936 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.936 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.936 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.936 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.936 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.936 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.936 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.936 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.937 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.937 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.937 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.937 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.937 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.937 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.937 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.938 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.938 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.938 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.938 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
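
vmware.host_password is printed as **** above because oslo.config masks any option registered with secret=True when log_opt_values() runs. A small sketch demonstrating the masking, with a placeholder value standing in for a real password:

    import logging

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts(
        [cfg.StrOpt('host_password', secret=True, default='not-a-real-password')],
        group='vmware')

    logging.basicConfig(level=logging.DEBUG)
    CONF([])
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)
    # the password line is rendered as "vmware.host_password = ****"
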
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.938 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.938 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.939 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.939 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.939 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.939 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.939 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.939 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.939 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.940 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.940 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.940 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.940 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.940 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.940 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.940 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.941 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.941 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.941 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.941 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.941 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.941 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.941 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.942 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.942 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.942 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.942 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.942 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.942 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.942 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.943 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
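
The dotted names in these lines map one-to-one onto attribute access on the global CONF object, which is how the workarounds above are consulted at runtime. A self-contained sketch using two options from this group:

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts(
        [
            cfg.BoolOpt('enable_qemu_monitor_announce_self', default=True),
            cfg.IntOpt('qemu_monitor_announce_self_count', default=3),
        ],
        group='workarounds')
    CONF([])

    # mirrors the values logged above: True and 3
    if CONF.workarounds.enable_qemu_monitor_announce_self:
        retries = CONF.workarounds.qemu_monitor_announce_self_count
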
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.943 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.943 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.943 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.943 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.943 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.943 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.944 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.944 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.944 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.944 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.944 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
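
wsgi.wsgi_log_format is a plain %-style format string interpolated with a per-request dict. A quick illustration with made-up request values:

    fmt = ('%(client_ip)s "%(request_line)s" status: %(status_code)s '
           'len: %(body_length)s time: %(wall_seconds).7f')
    print(fmt % {
        'client_ip': '192.0.2.10',            # illustrative values only
        'request_line': 'GET /v2.1 HTTP/1.1',
        'status_code': 200,
        'body_length': 388,
        'wall_seconds': 0.0042,
    })
    # -> 192.0.2.10 "GET /v2.1 HTTP/1.1" status: 200 len: 388 time: 0.0042000
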
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.944 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.944 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.945 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.945 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.945 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.945 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.945 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.945 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.945 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.946 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.946 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.946 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.946 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.946 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
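
With enforce_new_defaults and enforce_scope both True, oslo.policy applies the newer secure-RBAC default rules and scope checks; policy_file and policy_dirs name the operator overrides it will read. A hedged sketch of the consuming side (the config-file path is an assumption):

    from oslo_config import cfg
    from oslo_policy import policy

    CONF = cfg.CONF
    CONF(['--config-file', '/etc/nova/nova.conf'], project='nova')

    enforcer = policy.Enforcer(CONF)  # honors policy_file / policy_dirs above
    enforcer.load_rules()
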
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.946 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.946 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.947 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.947 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.947 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.947 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.947 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.947 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.947 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.948 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.948 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.948 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.948 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.948 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.948 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.948 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.948 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.949 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.949 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.949 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.949 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.949 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.949 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.949 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.950 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.950 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.950 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.950 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.950 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.950 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.950 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.951 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.951 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.951 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.951 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
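
The [oslo_messaging_rabbit] values above tune the RPC transport: durable quorum queues (amqp_durable_queues = True, rabbit_quorum_queue = True), a 60 s heartbeat window checked at rate 2, and a 30-connection RPC pool. A hedged sketch of building a transport that honors them; the config-file path is an assumption:

    import oslo_messaging as messaging
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF(['--config-file', '/etc/nova/nova.conf'], project='nova')

    # reads transport_url plus the [oslo_messaging_rabbit] tuning above;
    # with rabbit_quorum_queue = True, queues are declared as RabbitMQ
    # quorum queues, which are durable by design
    transport = messaging.get_rpc_transport(CONF)
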
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.951 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.951 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.951 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.952 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.952 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.952 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.952 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.952 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.952 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.952 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.953 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.953 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.953 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.953 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.953 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.953 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.953 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.954 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.954 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.954 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.954 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.954 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.954 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.954 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.955 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.955 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.955 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.955 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.955 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.955 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.955 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.955 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.956 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.956 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.956 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.956 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.956 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.956 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.956 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.957 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.957 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.957 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.957 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.957 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.957 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.957 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.958 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.958 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.958 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.958 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.958 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.958 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.958 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.959 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.959 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.959 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.959 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.959 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.959 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.960 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.960 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.960 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.960 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.960 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.960 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.960 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.961 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.961 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.961 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.961 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.961 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.961 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.961 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.961 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.962 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.962 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.962 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.962 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.962 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.962 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.962 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.963 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.963 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.963 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.963 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.963 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.963 188422 DEBUG oslo_service.service [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
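The block of oslo_limit.*, vif_plug_*, os_vif_*, os_brick.* and privsep option lines above, terminated by the row of asterisks, is oslo.config's standard startup dump: at DEBUG level the service logs every registered option with its effective value via log_opt_values(). A minimal sketch of how such a dump is produced (the single option registered here is illustrative, not nova's real schema):

    import logging
    from oslo_config import cfg

    CONF = cfg.CONF
    # Illustrative option; nova registers hundreds across many groups.
    CONF.register_opts([cfg.StrOpt('username', default='nova')],
                       group='oslo_limit')

    logging.basicConfig(level=logging.DEBUG)
    CONF(args=[])  # parse command line / config files (empty here)
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)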
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.964 188422 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.983 188422 DEBUG nova.virt.libvirt.host [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.984 188422 DEBUG nova.virt.libvirt.host [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.984 188422 DEBUG nova.virt.libvirt.host [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 26 18:09:10 np0005537197 nova_compute[188418]: 2025-11-26 23:09:10.984 188422 DEBUG nova.virt.libvirt.host [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 26 18:09:11 np0005537197 systemd[1]: Starting libvirt QEMU daemon...
Nov 26 18:09:11 np0005537197 systemd[1]: Started libvirt QEMU daemon.
Nov 26 18:09:11 np0005537197 nova_compute[188418]: 2025-11-26 23:09:11.086 188422 DEBUG nova.virt.libvirt.host [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f5d25bedb20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 26 18:09:11 np0005537197 nova_compute[188418]: 2025-11-26 23:09:11.090 188422 DEBUG nova.virt.libvirt.host [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f5d25bedb20> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 26 18:09:11 np0005537197 nova_compute[188418]: 2025-11-26 23:09:11.091 188422 INFO nova.virt.libvirt.driver [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Connection event '1' reason 'None'#033[00m
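The host.py lines above (native event thread, green dispatch thread, lifecycle and connection-event registration against qemu:///system) map onto libvirt-python's event API. A hedged sketch of the same registrations, with placeholder callbacks; nova runs the dispatch loop in the native event thread mentioned in the log:

    import libvirt

    def lifecycle_cb(conn, dom, event, detail, _opaque):
        print('lifecycle:', dom.name(), event, detail)

    def close_cb(conn, reason, _opaque):
        print('connection closed, reason', reason)

    libvirt.virEventRegisterDefaultImpl()   # must precede open() for events
    conn = libvirt.open('qemu:///system')
    conn.domainEventRegisterAny(None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                                lifecycle_cb, None)
    conn.registerCloseCallback(close_cb, None)
    while True:
        libvirt.virEventRunDefaultImpl()    # dispatch pending events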
Nov 26 18:09:11 np0005537197 nova_compute[188418]: 2025-11-26 23:09:11.111 188422 WARNING nova.virt.libvirt.driver [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
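The ComputeHostNotFound warning above is expected on a first start: no service record for compute-0.ctlplane.example.com exists yet in the cell database, so the status update is skipped. Once nova-compute is up, the host is normally mapped with the documented nova-manage command on a controller node; a trivial wrapper, for illustration only:

    import subprocess

    # Maps newly started compute hosts into their cell; after this the
    # "could not be found" warning no longer applies.
    subprocess.run(
        ['nova-manage', 'cell_v2', 'discover_hosts', '--verbose'],
        check=True,
    )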
Nov 26 18:09:11 np0005537197 nova_compute[188418]: 2025-11-26 23:09:11.112 188422 DEBUG nova.virt.libvirt.volume.mount [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Nov 26 18:09:11 np0005537197 python3.9[189088]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
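The Ansible task above invokes containers.podman.podman_container with state=absent and force_delete=True for nova_nvme_cleaner, i.e. remove the container if it exists, stopping it first if needed. A rough Python equivalent of what the module asks podman to do (assumes the podman CLI is on PATH):

    import subprocess

    # --force stops a running container first; --ignore makes a missing
    # container a no-op, matching state=absent semantics.
    subprocess.run(
        ['podman', 'rm', '--force', '--ignore', 'nova_nvme_cleaner'],
        check=True,
    )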
Nov 26 18:09:12 np0005537197 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.041 188422 INFO nova.virt.libvirt.host [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Libvirt host capabilities <capabilities>
Nov 26 18:09:12 np0005537197 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <host>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <uuid>d7e69efc-d84d-4224-8bbd-5fd303612f05</uuid>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <cpu>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <arch>x86_64</arch>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model>EPYC-Rome-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <vendor>AMD</vendor>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <microcode version='16777317'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <signature family='23' model='49' stepping='0'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature name='x2apic'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature name='tsc-deadline'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature name='osxsave'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature name='hypervisor'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature name='tsc_adjust'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature name='spec-ctrl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature name='stibp'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature name='arch-capabilities'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature name='ssbd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature name='cmp_legacy'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature name='topoext'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature name='virt-ssbd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature name='lbrv'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature name='tsc-scale'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature name='vmcb-clean'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature name='pause-filter'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature name='pfthreshold'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature name='svme-addr-chk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature name='rdctl-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature name='skip-l1dfl-vmentry'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature name='mds-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature name='pschange-mc-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <pages unit='KiB' size='4'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <pages unit='KiB' size='2048'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <pages unit='KiB' size='1048576'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </cpu>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <power_management>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <suspend_mem/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <suspend_disk/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <suspend_hybrid/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </power_management>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <iommu support='no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <migration_features>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <live/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <uri_transports>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <uri_transport>tcp</uri_transport>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <uri_transport>rdma</uri_transport>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </uri_transports>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </migration_features>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <topology>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <cells num='1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <cell id='0'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:          <memory unit='KiB'>7864324</memory>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:          <pages unit='KiB' size='4'>1966081</pages>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:          <pages unit='KiB' size='2048'>0</pages>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:          <distances>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:            <sibling id='0' value='10'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:          </distances>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:          <cpus num='8'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:          </cpus>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        </cell>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </cells>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </topology>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <cache>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </cache>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <secmodel>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model>selinux</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <doi>0</doi>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </secmodel>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <secmodel>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model>dac</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <doi>0</doi>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </secmodel>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  </host>
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <guest>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <os_type>hvm</os_type>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <arch name='i686'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <wordsize>32</wordsize>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <domain type='qemu'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <domain type='kvm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </arch>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <features>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <pae/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <nonpae/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <acpi default='on' toggle='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <apic default='on' toggle='no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <cpuselection/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <deviceboot/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <disksnapshot default='on' toggle='no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <externalSnapshot/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </features>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  </guest>
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <guest>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <os_type>hvm</os_type>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <arch name='x86_64'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <wordsize>64</wordsize>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <domain type='qemu'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <domain type='kvm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </arch>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <features>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <acpi default='on' toggle='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <apic default='on' toggle='no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <cpuselection/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <deviceboot/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <disksnapshot default='on' toggle='no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <externalSnapshot/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </features>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  </guest>
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 
Nov 26 18:09:12 np0005537197 nova_compute[188418]: </capabilities>
Nov 26 18:09:12 np0005537197 nova_compute[188418]: #033[00m
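Everything between the "Libvirt host capabilities" line and the closing </capabilities> tag above is the verbatim return value of libvirt's getCapabilities() call, and the per-arch/machine-type dumps that follow come from getDomainCapabilities(). A short sketch that fetches the same XML and extracts the host fields shown above (x86_64 EPYC-Rome-v4, a single NUMA cell):

    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    caps = ET.fromstring(conn.getCapabilities())
    cpu = caps.find('./host/cpu')
    print(cpu.findtext('arch'), cpu.findtext('model'))   # x86_64 EPYC-Rome-v4
    for cell in caps.findall('./host/topology/cells/cell'):
        print('cell', cell.get('id'), cell.findtext('memory'), 'KiB')

    # Per-machine-type capabilities, as logged for i686/pc below:
    domcaps = conn.getDomainCapabilities('/usr/libexec/qemu-kvm',
                                         'i686', 'pc', 'kvm')
    print(ET.fromstring(domcaps).findtext('machine'))    # pc-i440fx-rhel7.6.0
    conn.close()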
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.050 188422 DEBUG nova.virt.libvirt.host [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.084 188422 DEBUG nova.virt.libvirt.host [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 26 18:09:12 np0005537197 nova_compute[188418]: <domainCapabilities>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <path>/usr/libexec/qemu-kvm</path>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <domain>kvm</domain>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <arch>i686</arch>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <vcpu max='240'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <iothreads supported='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <os supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <enum name='firmware'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <loader supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='type'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>rom</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>pflash</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='readonly'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>yes</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>no</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='secure'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>no</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </loader>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  </os>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <cpu>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <mode name='host-passthrough' supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='hostPassthroughMigratable'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>on</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>off</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </mode>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <mode name='maximum' supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='maximumMigratable'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>on</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>off</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </mode>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <mode name='host-model' supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <vendor>AMD</vendor>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='x2apic'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='tsc-deadline'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='hypervisor'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='tsc_adjust'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='spec-ctrl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='stibp'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='ssbd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='cmp_legacy'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='overflow-recov'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='succor'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='ibrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='amd-ssbd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='virt-ssbd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='lbrv'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='tsc-scale'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='vmcb-clean'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='flushbyasid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='pause-filter'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='pfthreshold'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='svme-addr-chk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='disable' name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </mode>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <mode name='custom' supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell-noTSX'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cascadelake-Server'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cascadelake-Server-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cascadelake-Server-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cascadelake-Server-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cascadelake-Server-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cascadelake-Server-v5'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cooperlake'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cooperlake-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cooperlake-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Denverton'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mpx'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Denverton-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mpx'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Denverton-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Denverton-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Dhyana-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Genoa'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amd-psfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='auto-ibrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='no-nested-data-bp'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='null-sel-clr-base'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='stibp-always-on'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Genoa-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amd-psfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='auto-ibrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='no-nested-data-bp'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='null-sel-clr-base'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='stibp-always-on'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Milan'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Milan-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Milan-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amd-psfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='no-nested-data-bp'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='null-sel-clr-base'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='stibp-always-on'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Rome'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Rome-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Rome-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Rome-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='GraniteRapids'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-tile'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fbsdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrc'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fzrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mcdt-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pbrsb-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='prefetchiti'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='psdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='GraniteRapids-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-tile'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fbsdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrc'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fzrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mcdt-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pbrsb-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='prefetchiti'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='psdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='GraniteRapids-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-tile'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx10'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx10-128'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx10-256'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx10-512'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cldemote'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fbsdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrc'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fzrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mcdt-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdir64b'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdiri'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pbrsb-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='prefetchiti'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='psdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell-noTSX'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-noTSX'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-v5'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-v6'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-v7'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='IvyBridge'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='IvyBridge-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='IvyBridge-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='IvyBridge-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='KnightsMill'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-4fmaps'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-4vnniw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512er'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512pf'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='KnightsMill-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-4fmaps'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-4vnniw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512er'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512pf'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Opteron_G4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fma4'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xop'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Opteron_G4-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fma4'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xop'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Opteron_G5'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fma4'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tbm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xop'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Opteron_G5-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fma4'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tbm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xop'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='SapphireRapids'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-tile'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrc'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fzrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='SapphireRapids-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-tile'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrc'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fzrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='SapphireRapids-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-tile'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fbsdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrc'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fzrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='psdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='SapphireRapids-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-tile'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cldemote'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fbsdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrc'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fzrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdir64b'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdiri'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='psdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='SierraForest'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-ne-convert'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cmpccxadd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fbsdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mcdt-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pbrsb-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='psdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='SierraForest-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-ne-convert'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cmpccxadd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fbsdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mcdt-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pbrsb-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='psdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Client'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Client-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Client-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Client-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Client-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Client-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server-v5'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Snowridge'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cldemote'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='core-capability'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdir64b'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdiri'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mpx'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='split-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Snowridge-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cldemote'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='core-capability'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdir64b'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdiri'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mpx'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='split-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Snowridge-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cldemote'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='core-capability'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdir64b'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdiri'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='split-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Snowridge-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cldemote'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='core-capability'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdir64b'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdiri'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='split-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Snowridge-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cldemote'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdir64b'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdiri'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='athlon'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnow'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnowext'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='athlon-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnow'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnowext'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='core2duo'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='core2duo-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='coreduo'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='coreduo-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='n270'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='n270-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='phenom'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnow'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnowext'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='phenom-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnow'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnowext'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </mode>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  </cpu>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <memoryBacking supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <enum name='sourceType'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <value>file</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <value>anonymous</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <value>memfd</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  </memoryBacking>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <devices>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <disk supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='diskDevice'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>disk</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>cdrom</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>floppy</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>lun</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='bus'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>ide</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>fdc</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>scsi</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>usb</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>sata</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='model'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio-transitional</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio-non-transitional</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </disk>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <graphics supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='type'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>vnc</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>egl-headless</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>dbus</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </graphics>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <video supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='modelType'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>vga</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>cirrus</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>none</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>bochs</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>ramfb</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </video>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <hostdev supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='mode'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>subsystem</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='startupPolicy'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>default</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>mandatory</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>requisite</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>optional</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='subsysType'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>usb</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>pci</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>scsi</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='capsType'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='pciBackend'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </hostdev>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <rng supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='model'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio-transitional</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio-non-transitional</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='backendModel'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>random</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>egd</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>builtin</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </rng>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <filesystem supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='driverType'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>path</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>handle</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtiofs</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </filesystem>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <tpm supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='model'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>tpm-tis</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>tpm-crb</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='backendModel'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>emulator</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>external</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='backendVersion'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>2.0</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </tpm>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <redirdev supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='bus'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>usb</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </redirdev>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <channel supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='type'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>pty</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>unix</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </channel>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <crypto supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='model'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='type'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>qemu</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='backendModel'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>builtin</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </crypto>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <interface supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='backendType'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>default</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>passt</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </interface>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <panic supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='model'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>isa</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>hyperv</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </panic>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <console supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='type'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>null</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>vc</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>pty</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>dev</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>file</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>pipe</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>stdio</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>udp</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>tcp</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>unix</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>qemu-vdagent</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>dbus</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </console>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  </devices>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <features>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <gic supported='no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <vmcoreinfo supported='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <genid supported='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <backingStoreInput supported='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <backup supported='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <async-teardown supported='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <ps2 supported='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <sev supported='no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <sgx supported='no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <hyperv supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='features'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>relaxed</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>vapic</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>spinlocks</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>vpindex</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>runtime</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>synic</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>stimer</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>reset</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>vendor_id</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>frequencies</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>reenlightenment</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>tlbflush</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>ipi</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>avic</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>emsr_bitmap</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>xmm_input</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <defaults>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <spinlocks>4095</spinlocks>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <stimer_direct>on</stimer_direct>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <tlbflush_direct>on</tlbflush_direct>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <tlbflush_extended>on</tlbflush_extended>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </defaults>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </hyperv>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <launchSecurity supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='sectype'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>tdx</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </launchSecurity>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  </features>
Nov 26 18:09:12 np0005537197 nova_compute[188418]: </domainCapabilities>
Nov 26 18:09:12 np0005537197 nova_compute[188418]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.096 188422 DEBUG nova.virt.libvirt.host [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 26 18:09:12 np0005537197 nova_compute[188418]: <domainCapabilities>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <path>/usr/libexec/qemu-kvm</path>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <domain>kvm</domain>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <arch>i686</arch>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <vcpu max='4096'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <iothreads supported='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <os supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <enum name='firmware'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <loader supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='type'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>rom</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>pflash</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='readonly'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>yes</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>no</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='secure'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>no</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </loader>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  </os>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <cpu>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <mode name='host-passthrough' supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='hostPassthroughMigratable'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>on</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>off</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </mode>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <mode name='maximum' supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='maximumMigratable'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>on</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>off</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </mode>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <mode name='host-model' supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <vendor>AMD</vendor>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='x2apic'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='tsc-deadline'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='hypervisor'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='tsc_adjust'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='spec-ctrl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='stibp'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='ssbd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='cmp_legacy'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='overflow-recov'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='succor'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='ibrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='amd-ssbd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='virt-ssbd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='lbrv'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='tsc-scale'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='vmcb-clean'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='flushbyasid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='pause-filter'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='pfthreshold'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='svme-addr-chk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='disable' name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </mode>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <mode name='custom' supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell-noTSX'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cascadelake-Server'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cascadelake-Server-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cascadelake-Server-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cascadelake-Server-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cascadelake-Server-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cascadelake-Server-v5'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cooperlake'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cooperlake-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cooperlake-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Denverton'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mpx'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Denverton-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mpx'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Denverton-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Denverton-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Dhyana-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Genoa'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amd-psfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='auto-ibrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='no-nested-data-bp'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='null-sel-clr-base'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='stibp-always-on'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Genoa-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amd-psfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='auto-ibrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='no-nested-data-bp'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='null-sel-clr-base'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='stibp-always-on'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Milan'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Milan-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Milan-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amd-psfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='no-nested-data-bp'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='null-sel-clr-base'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='stibp-always-on'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Rome'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Rome-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Rome-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Rome-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='GraniteRapids'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-tile'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fbsdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrc'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fzrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mcdt-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pbrsb-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='prefetchiti'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='psdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='GraniteRapids-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-tile'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fbsdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrc'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fzrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mcdt-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pbrsb-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='prefetchiti'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='psdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='GraniteRapids-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-tile'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx10'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx10-128'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx10-256'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx10-512'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cldemote'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fbsdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrc'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fzrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mcdt-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdir64b'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdiri'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pbrsb-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='prefetchiti'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='psdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell-noTSX'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-noTSX'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-v5'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-v6'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-v7'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='IvyBridge'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='IvyBridge-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='IvyBridge-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='IvyBridge-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='KnightsMill'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-4fmaps'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-4vnniw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512er'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512pf'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='KnightsMill-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-4fmaps'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-4vnniw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512er'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512pf'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Opteron_G4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fma4'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xop'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Opteron_G4-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fma4'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xop'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Opteron_G5'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fma4'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tbm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xop'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Opteron_G5-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fma4'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tbm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xop'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='SapphireRapids'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-tile'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrc'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fzrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='SapphireRapids-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-tile'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrc'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fzrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='SapphireRapids-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-tile'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fbsdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrc'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fzrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='psdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='SapphireRapids-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-tile'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cldemote'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fbsdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrc'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fzrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdir64b'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdiri'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='psdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='SierraForest'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-ne-convert'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cmpccxadd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fbsdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mcdt-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pbrsb-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='psdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='SierraForest-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-ne-convert'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cmpccxadd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fbsdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mcdt-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pbrsb-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='psdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Client'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Client-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Client-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Client-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Client-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Client-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server-v5'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Snowridge'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cldemote'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='core-capability'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdir64b'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdiri'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mpx'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='split-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Snowridge-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cldemote'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='core-capability'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdir64b'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdiri'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mpx'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='split-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Snowridge-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cldemote'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='core-capability'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdir64b'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdiri'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='split-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Snowridge-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cldemote'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='core-capability'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdir64b'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdiri'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='split-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Snowridge-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cldemote'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdir64b'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdiri'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='athlon'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnow'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnowext'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='athlon-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnow'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnowext'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='core2duo'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='core2duo-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='coreduo'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='coreduo-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='n270'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='n270-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='phenom'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnow'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnowext'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='phenom-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnow'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnowext'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </mode>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  </cpu>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <memoryBacking supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <enum name='sourceType'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <value>file</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <value>anonymous</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <value>memfd</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  </memoryBacking>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <devices>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <disk supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='diskDevice'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>disk</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>cdrom</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>floppy</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>lun</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='bus'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>fdc</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>scsi</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>usb</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>sata</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='model'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio-transitional</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio-non-transitional</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </disk>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <graphics supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='type'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>vnc</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>egl-headless</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>dbus</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </graphics>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <video supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='modelType'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>vga</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>cirrus</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>none</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>bochs</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>ramfb</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </video>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <hostdev supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='mode'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>subsystem</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='startupPolicy'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>default</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>mandatory</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>requisite</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>optional</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='subsysType'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>usb</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>pci</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>scsi</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='capsType'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='pciBackend'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </hostdev>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <rng supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='model'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio-transitional</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio-non-transitional</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='backendModel'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>random</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>egd</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>builtin</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </rng>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <filesystem supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='driverType'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>path</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>handle</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtiofs</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </filesystem>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <tpm supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='model'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>tpm-tis</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>tpm-crb</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='backendModel'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>emulator</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>external</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='backendVersion'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>2.0</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </tpm>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <redirdev supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='bus'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>usb</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </redirdev>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <channel supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='type'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>pty</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>unix</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </channel>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <crypto supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='model'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='type'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>qemu</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='backendModel'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>builtin</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </crypto>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <interface supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='backendType'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>default</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>passt</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </interface>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <panic supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='model'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>isa</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>hyperv</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </panic>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <console supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='type'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>null</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>vc</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>pty</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>dev</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>file</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>pipe</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>stdio</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>udp</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>tcp</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>unix</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>qemu-vdagent</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>dbus</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </console>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  </devices>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <features>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <gic supported='no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <vmcoreinfo supported='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <genid supported='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <backingStoreInput supported='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <backup supported='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <async-teardown supported='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <ps2 supported='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <sev supported='no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <sgx supported='no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <hyperv supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='features'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>relaxed</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>vapic</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>spinlocks</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>vpindex</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>runtime</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>synic</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>stimer</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>reset</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>vendor_id</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>frequencies</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>reenlightenment</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>tlbflush</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>ipi</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>avic</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>emsr_bitmap</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>xmm_input</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <defaults>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <spinlocks>4095</spinlocks>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <stimer_direct>on</stimer_direct>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <tlbflush_direct>on</tlbflush_direct>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <tlbflush_extended>on</tlbflush_extended>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </defaults>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </hyperv>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <launchSecurity supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='sectype'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>tdx</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </launchSecurity>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  </features>
Nov 26 18:09:12 np0005537197 nova_compute[188418]: </domainCapabilities>
Nov 26 18:09:12 np0005537197 nova_compute[188418]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
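[Editor's note: the dump above is the raw domainCapabilities XML that nova's _get_domain_capabilities helper fetches from libvirt. To reproduce the same query outside nova, here is a minimal sketch using the libvirt Python bindings. Assumptions not in the log: libvirt-python is installed and the connection URI is qemu:///system (nova uses its own configured connection); the emulator path, arch, machine type, and virt type are taken directly from the dump itself.]

```python
import libvirt
import xml.etree.ElementTree as ET

# Connect to the local system hypervisor. The URI is an assumption;
# nova connects through its own configured libvirt URI.
conn = libvirt.open("qemu:///system")

# Same query nova logs above: emulator binary (from the <path> element),
# arch, machine type, and virt type. nova also repeats this for 'pc'.
caps_xml = conn.getDomainCapabilities(
    "/usr/libexec/qemu-kvm",  # emulator
    "x86_64",                 # arch
    "q35",                    # machine type
    "kvm",                    # virttype
    0,                        # flags
)

root = ET.fromstring(caps_xml)

# Mirror the <model usable=...>/<blockers> pairs from the custom CPU mode:
# print usable models, then each blocked model with its blocking features.
custom = root.find("./cpu/mode[@name='custom']")
for model in custom.findall("model"):
    if model.get("usable") == "yes":
        print("usable:", model.text)
for blockers in custom.findall("blockers"):
    feats = [f.get("name") for f in blockers.findall("feature")]
    print("blocked:", blockers.get("model"), "->", ", ".join(feats))

conn.close()
```

[As the next log lines show, nova then repeats the identical query once per supported machine type ('pc' and 'q35') for the architecture.]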
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.143 188422 DEBUG nova.virt.libvirt.host [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.153 188422 DEBUG nova.virt.libvirt.host [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 26 18:09:12 np0005537197 nova_compute[188418]: <domainCapabilities>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <path>/usr/libexec/qemu-kvm</path>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <domain>kvm</domain>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <arch>x86_64</arch>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <vcpu max='240'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <iothreads supported='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <os supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <enum name='firmware'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <loader supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='type'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>rom</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>pflash</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='readonly'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>yes</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>no</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='secure'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>no</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </loader>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  </os>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <cpu>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <mode name='host-passthrough' supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='hostPassthroughMigratable'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>on</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>off</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </mode>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <mode name='maximum' supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='maximumMigratable'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>on</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>off</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </mode>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <mode name='host-model' supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <vendor>AMD</vendor>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='x2apic'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='tsc-deadline'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='hypervisor'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='tsc_adjust'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='spec-ctrl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='stibp'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='ssbd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='cmp_legacy'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='overflow-recov'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='succor'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='ibrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='amd-ssbd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='virt-ssbd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='lbrv'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='tsc-scale'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='vmcb-clean'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='flushbyasid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='pause-filter'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='pfthreshold'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='svme-addr-chk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='disable' name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </mode>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <mode name='custom' supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell-noTSX'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cascadelake-Server'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cascadelake-Server-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cascadelake-Server-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cascadelake-Server-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cascadelake-Server-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cascadelake-Server-v5'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cooperlake'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cooperlake-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cooperlake-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Denverton'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mpx'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Denverton-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mpx'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Denverton-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Denverton-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Dhyana-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Genoa'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amd-psfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='auto-ibrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='no-nested-data-bp'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='null-sel-clr-base'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='stibp-always-on'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Genoa-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amd-psfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='auto-ibrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='no-nested-data-bp'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='null-sel-clr-base'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='stibp-always-on'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Milan'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Milan-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Milan-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amd-psfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='no-nested-data-bp'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='null-sel-clr-base'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='stibp-always-on'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Rome'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Rome-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Rome-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Rome-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='GraniteRapids'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-tile'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fbsdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrc'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fzrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mcdt-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pbrsb-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='prefetchiti'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='psdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='GraniteRapids-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-tile'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fbsdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrc'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fzrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mcdt-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pbrsb-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='prefetchiti'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='psdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='GraniteRapids-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-tile'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx10'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx10-128'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx10-256'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx10-512'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cldemote'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fbsdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrc'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fzrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mcdt-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdir64b'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdiri'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pbrsb-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='prefetchiti'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='psdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell-noTSX'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-noTSX'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-v5'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-v6'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-v7'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='IvyBridge'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='IvyBridge-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='IvyBridge-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='IvyBridge-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='KnightsMill'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-4fmaps'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-4vnniw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512er'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512pf'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='KnightsMill-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-4fmaps'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-4vnniw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512er'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512pf'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Opteron_G4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fma4'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xop'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Opteron_G4-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fma4'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xop'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Opteron_G5'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fma4'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tbm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xop'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Opteron_G5-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fma4'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tbm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xop'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='SapphireRapids'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-tile'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrc'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fzrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='SapphireRapids-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-tile'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrc'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fzrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='SapphireRapids-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-tile'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fbsdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrc'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fzrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='psdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='SapphireRapids-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-tile'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cldemote'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fbsdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrc'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fzrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdir64b'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdiri'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='psdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='SierraForest'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-ne-convert'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cmpccxadd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fbsdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mcdt-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pbrsb-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='psdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='SierraForest-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-ne-convert'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cmpccxadd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fbsdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mcdt-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pbrsb-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='psdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Client'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Client-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Client-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Client-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Client-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Client-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server-v5'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Snowridge'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cldemote'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='core-capability'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdir64b'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdiri'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mpx'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='split-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Snowridge-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cldemote'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='core-capability'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdir64b'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdiri'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mpx'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='split-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Snowridge-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cldemote'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='core-capability'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdir64b'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdiri'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='split-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Snowridge-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cldemote'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='core-capability'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdir64b'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdiri'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='split-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Snowridge-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cldemote'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdir64b'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdiri'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='athlon'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnow'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnowext'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='athlon-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnow'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnowext'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='core2duo'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='core2duo-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='coreduo'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='coreduo-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='n270'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='n270-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='phenom'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnow'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnowext'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='phenom-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnow'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnowext'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </mode>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  </cpu>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <memoryBacking supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <enum name='sourceType'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <value>file</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <value>anonymous</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <value>memfd</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  </memoryBacking>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <devices>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <disk supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='diskDevice'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>disk</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>cdrom</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>floppy</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>lun</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='bus'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>ide</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>fdc</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>scsi</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>usb</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>sata</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='model'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio-transitional</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio-non-transitional</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </disk>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <graphics supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='type'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>vnc</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>egl-headless</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>dbus</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </graphics>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <video supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='modelType'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>vga</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>cirrus</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>none</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>bochs</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>ramfb</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </video>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <hostdev supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='mode'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>subsystem</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='startupPolicy'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>default</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>mandatory</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>requisite</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>optional</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='subsysType'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>usb</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>pci</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>scsi</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='capsType'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='pciBackend'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </hostdev>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <rng supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='model'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio-transitional</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio-non-transitional</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='backendModel'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>random</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>egd</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>builtin</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </rng>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <filesystem supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='driverType'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>path</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>handle</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtiofs</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </filesystem>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <tpm supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='model'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>tpm-tis</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>tpm-crb</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='backendModel'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>emulator</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>external</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='backendVersion'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>2.0</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </tpm>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <redirdev supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='bus'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>usb</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </redirdev>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <channel supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='type'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>pty</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>unix</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </channel>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <crypto supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='model'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='type'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>qemu</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='backendModel'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>builtin</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </crypto>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <interface supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='backendType'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>default</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>passt</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </interface>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <panic supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='model'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>isa</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>hyperv</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </panic>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <console supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='type'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>null</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>vc</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>pty</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>dev</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>file</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>pipe</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>stdio</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>udp</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>tcp</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>unix</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>qemu-vdagent</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>dbus</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </console>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  </devices>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <features>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <gic supported='no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <vmcoreinfo supported='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <genid supported='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <backingStoreInput supported='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <backup supported='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <async-teardown supported='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <ps2 supported='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <sev supported='no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <sgx supported='no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <hyperv supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='features'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>relaxed</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>vapic</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>spinlocks</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>vpindex</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>runtime</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>synic</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>stimer</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>reset</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>vendor_id</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>frequencies</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>reenlightenment</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>tlbflush</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>ipi</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>avic</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>emsr_bitmap</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>xmm_input</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <defaults>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <spinlocks>4095</spinlocks>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <stimer_direct>on</stimer_direct>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <tlbflush_direct>on</tlbflush_direct>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <tlbflush_extended>on</tlbflush_extended>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </defaults>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </hyperv>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <launchSecurity supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='sectype'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>tdx</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </launchSecurity>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  </features>
Nov 26 18:09:12 np0005537197 nova_compute[188418]: </domainCapabilities>
Nov 26 18:09:12 np0005537197 nova_compute[188418]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.216 188422 DEBUG nova.virt.libvirt.host [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 26 18:09:12 np0005537197 nova_compute[188418]: <domainCapabilities>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <path>/usr/libexec/qemu-kvm</path>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <domain>kvm</domain>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <arch>x86_64</arch>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <vcpu max='4096'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <iothreads supported='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <os supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <enum name='firmware'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <value>efi</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <loader supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='type'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>rom</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>pflash</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='readonly'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>yes</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>no</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='secure'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>yes</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>no</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </loader>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  </os>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <cpu>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <mode name='host-passthrough' supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='hostPassthroughMigratable'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>on</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>off</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </mode>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <mode name='maximum' supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='maximumMigratable'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>on</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>off</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </mode>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <mode name='host-model' supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <vendor>AMD</vendor>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='x2apic'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='tsc-deadline'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='hypervisor'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='tsc_adjust'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='spec-ctrl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='stibp'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='ssbd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='cmp_legacy'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='overflow-recov'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='succor'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='ibrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='amd-ssbd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='virt-ssbd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='lbrv'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='tsc-scale'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='vmcb-clean'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='flushbyasid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='pause-filter'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='pfthreshold'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='svme-addr-chk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <feature policy='disable' name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </mode>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <mode name='custom' supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell-noTSX'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Broadwell-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cascadelake-Server'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cascadelake-Server-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cascadelake-Server-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cascadelake-Server-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cascadelake-Server-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cascadelake-Server-v5'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cooperlake'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cooperlake-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Cooperlake-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Denverton'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mpx'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Denverton-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mpx'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Denverton-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Denverton-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Dhyana-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Genoa'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amd-psfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='auto-ibrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='no-nested-data-bp'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='null-sel-clr-base'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='stibp-always-on'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Genoa-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amd-psfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='auto-ibrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='no-nested-data-bp'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='null-sel-clr-base'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='stibp-always-on'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Milan'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Milan-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Milan-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amd-psfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='no-nested-data-bp'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='null-sel-clr-base'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='stibp-always-on'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Rome'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Rome-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Rome-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-Rome-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='EPYC-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='GraniteRapids'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-tile'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fbsdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrc'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fzrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mcdt-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pbrsb-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='prefetchiti'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='psdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='GraniteRapids-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-tile'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fbsdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrc'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fzrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mcdt-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pbrsb-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='prefetchiti'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='psdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='GraniteRapids-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-tile'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx10'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx10-128'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx10-256'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx10-512'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cldemote'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fbsdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrc'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fzrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mcdt-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdir64b'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdiri'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pbrsb-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='prefetchiti'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='psdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell-noTSX'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Haswell-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-noTSX'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-v5'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-v6'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Icelake-Server-v7'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='IvyBridge'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='IvyBridge-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='IvyBridge-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='IvyBridge-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='KnightsMill'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-4fmaps'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-4vnniw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512er'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512pf'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='KnightsMill-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-4fmaps'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-4vnniw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512er'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512pf'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Opteron_G4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fma4'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xop'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Opteron_G4-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fma4'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xop'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Opteron_G5'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fma4'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tbm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xop'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Opteron_G5-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fma4'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tbm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xop'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='SapphireRapids'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-tile'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrc'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fzrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='SapphireRapids-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-tile'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrc'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fzrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='SapphireRapids-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-tile'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fbsdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrc'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fzrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='psdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='SapphireRapids-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='amx-tile'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-bf16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-fp16'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bitalg'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cldemote'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fbsdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrc'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fzrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='la57'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdir64b'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdiri'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='psdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='taa-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xfd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='SierraForest'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-ne-convert'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cmpccxadd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fbsdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mcdt-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pbrsb-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='psdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='SierraForest-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-ifma'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-ne-convert'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx-vnni-int8'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cmpccxadd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fbsdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='fsrs'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ibrs-all'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mcdt-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pbrsb-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='psdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='serialize'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vaes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Client'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Client-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Client-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Client-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Client-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Client-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='hle'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='rtm'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Skylake-Server-v5'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512bw'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512cd'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512dq'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512f'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='avx512vl'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='invpcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pcid'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='pku'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Snowridge'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cldemote'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='core-capability'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdir64b'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdiri'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mpx'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='split-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Snowridge-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cldemote'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='core-capability'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdir64b'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdiri'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='mpx'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='split-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Snowridge-v2'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cldemote'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='core-capability'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdir64b'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdiri'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='split-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Snowridge-v3'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cldemote'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='core-capability'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdir64b'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdiri'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='split-lock-detect'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='Snowridge-v4'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='cldemote'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='erms'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='gfni'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdir64b'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='movdiri'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='xsaves'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='athlon'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnow'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnowext'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='athlon-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnow'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnowext'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='core2duo'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='core2duo-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='coreduo'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='coreduo-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='n270'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='n270-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='ss'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='phenom'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnow'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnowext'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <blockers model='phenom-v1'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnow'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <feature name='3dnowext'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </blockers>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </mode>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  </cpu>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <memoryBacking supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <enum name='sourceType'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <value>file</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <value>anonymous</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <value>memfd</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  </memoryBacking>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <devices>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <disk supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='diskDevice'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>disk</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>cdrom</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>floppy</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>lun</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='bus'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>fdc</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>scsi</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>usb</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>sata</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='model'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio-transitional</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio-non-transitional</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </disk>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <graphics supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='type'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>vnc</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>egl-headless</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>dbus</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </graphics>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <video supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='modelType'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>vga</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>cirrus</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>none</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>bochs</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>ramfb</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </video>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <hostdev supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='mode'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>subsystem</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='startupPolicy'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>default</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>mandatory</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>requisite</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>optional</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='subsysType'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>usb</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>pci</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>scsi</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='capsType'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='pciBackend'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </hostdev>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <rng supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='model'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio-transitional</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtio-non-transitional</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='backendModel'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>random</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>egd</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>builtin</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </rng>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <filesystem supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='driverType'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>path</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>handle</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>virtiofs</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </filesystem>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <tpm supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='model'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>tpm-tis</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>tpm-crb</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='backendModel'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>emulator</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>external</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='backendVersion'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>2.0</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </tpm>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <redirdev supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='bus'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>usb</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </redirdev>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <channel supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='type'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>pty</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>unix</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </channel>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <crypto supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='model'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='type'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>qemu</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='backendModel'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>builtin</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </crypto>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <interface supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='backendType'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>default</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>passt</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </interface>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <panic supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='model'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>isa</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>hyperv</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </panic>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <console supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='type'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>null</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>vc</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>pty</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>dev</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>file</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>pipe</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>stdio</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>udp</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>tcp</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>unix</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>qemu-vdagent</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>dbus</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </console>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  </devices>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  <features>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <gic supported='no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <vmcoreinfo supported='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <genid supported='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <backingStoreInput supported='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <backup supported='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <async-teardown supported='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <ps2 supported='yes'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <sev supported='no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <sgx supported='no'/>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <hyperv supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='features'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>relaxed</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>vapic</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>spinlocks</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>vpindex</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>runtime</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>synic</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>stimer</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>reset</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>vendor_id</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>frequencies</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>reenlightenment</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>tlbflush</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>ipi</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>avic</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>emsr_bitmap</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>xmm_input</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <defaults>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <spinlocks>4095</spinlocks>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <stimer_direct>on</stimer_direct>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <tlbflush_direct>on</tlbflush_direct>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <tlbflush_extended>on</tlbflush_extended>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </defaults>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </hyperv>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    <launchSecurity supported='yes'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      <enum name='sectype'>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:        <value>tdx</value>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:      </enum>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:    </launchSecurity>
Nov 26 18:09:12 np0005537197 nova_compute[188418]:  </features>
Nov 26 18:09:12 np0005537197 nova_compute[188418]: </domainCapabilities>
Nov 26 18:09:12 np0005537197 nova_compute[188418]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.276 188422 DEBUG nova.virt.libvirt.host [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.277 188422 DEBUG nova.virt.libvirt.host [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.277 188422 DEBUG nova.virt.libvirt.host [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.277 188422 INFO nova.virt.libvirt.host [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Secure Boot support detected#033[00m
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.280 188422 INFO nova.virt.libvirt.driver [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.280 188422 INFO nova.virt.libvirt.driver [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.298 188422 DEBUG nova.virt.libvirt.driver [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.341 188422 INFO nova.virt.node [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Determined node identity de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 from /var/lib/nova/compute_id#033[00m
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.359 188422 WARNING nova.compute.manager [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Compute nodes ['de65df0c-bd6c-4ecc-b0a9-30ae4314ce78'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.398 188422 INFO nova.compute.manager [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.434 188422 WARNING nova.compute.manager [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.434 188422 DEBUG oslo_concurrency.lockutils [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.435 188422 DEBUG oslo_concurrency.lockutils [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.435 188422 DEBUG oslo_concurrency.lockutils [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.435 188422 DEBUG nova.compute.resource_tracker [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 18:09:12 np0005537197 systemd[1]: Starting libvirt nodedev daemon...
Nov 26 18:09:12 np0005537197 systemd[1]: Started libvirt nodedev daemon.
Nov 26 18:09:12 np0005537197 podman[189201]: 2025-11-26 23:09:12.627596506 +0000 UTC m=+0.118022822 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.784 188422 WARNING nova.virt.libvirt.driver [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.786 188422 DEBUG nova.compute.resource_tracker [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6053MB free_disk=72.6102180480957GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.786 188422 DEBUG oslo_concurrency.lockutils [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.786 188422 DEBUG oslo_concurrency.lockutils [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.800 188422 WARNING nova.compute.resource_tracker [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] No compute node record for compute-0.ctlplane.example.com:de65df0c-bd6c-4ecc-b0a9-30ae4314ce78: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 could not be found.#033[00m
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.820 188422 INFO nova.compute.resource_tracker [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78#033[00m
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.883 188422 DEBUG nova.compute.resource_tracker [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 18:09:12 np0005537197 nova_compute[188418]: 2025-11-26 23:09:12.884 188422 DEBUG nova.compute.resource_tracker [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 18:09:13 np0005537197 python3.9[189326]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 18:09:13 np0005537197 systemd[1]: Stopping nova_compute container...
Nov 26 18:09:13 np0005537197 nova_compute[188418]: 2025-11-26 23:09:13.164 188422 DEBUG oslo_concurrency.lockutils [None req-d6e58616-ef36-49b8-ac4c-690bd12e6969 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.378s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 18:09:13 np0005537197 nova_compute[188418]: 2025-11-26 23:09:13.164 188422 DEBUG oslo_concurrency.lockutils [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 18:09:13 np0005537197 nova_compute[188418]: 2025-11-26 23:09:13.165 188422 DEBUG oslo_concurrency.lockutils [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 18:09:13 np0005537197 nova_compute[188418]: 2025-11-26 23:09:13.165 188422 DEBUG oslo_concurrency.lockutils [None req-f9043e3d-9af5-4f14-bda5-20139a245503 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 18:09:13 np0005537197 virtqemud[188953]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 26 18:09:13 np0005537197 virtqemud[188953]: hostname: compute-0
Nov 26 18:09:13 np0005537197 virtqemud[188953]: End of file while reading data: Input/output error
Nov 26 18:09:13 np0005537197 systemd[1]: libpod-020019830bcf75bc086f375602c38352ca3a81fbe13eab2ae08d6da7f49d7d19.scope: Deactivated successfully.
Nov 26 18:09:13 np0005537197 systemd[1]: libpod-020019830bcf75bc086f375602c38352ca3a81fbe13eab2ae08d6da7f49d7d19.scope: Consumed 3.041s CPU time.
Nov 26 18:09:13 np0005537197 podman[189330]: 2025-11-26 23:09:13.515668613 +0000 UTC m=+0.399261950 container died 020019830bcf75bc086f375602c38352ca3a81fbe13eab2ae08d6da7f49d7d19 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.license=GPLv2, config_id=edpm, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 26 18:09:13 np0005537197 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-020019830bcf75bc086f375602c38352ca3a81fbe13eab2ae08d6da7f49d7d19-userdata-shm.mount: Deactivated successfully.
Nov 26 18:09:13 np0005537197 systemd[1]: var-lib-containers-storage-overlay-ff1fe25b828556e1b57261f35fdf806cc1a19c1ccf38b099d6d0267d6f2e77bf-merged.mount: Deactivated successfully.
Nov 26 18:09:13 np0005537197 podman[189330]: 2025-11-26 23:09:13.589238319 +0000 UTC m=+0.472831656 container cleanup 020019830bcf75bc086f375602c38352ca3a81fbe13eab2ae08d6da7f49d7d19 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 26 18:09:13 np0005537197 podman[189330]: nova_compute
Nov 26 18:09:13 np0005537197 podman[189359]: nova_compute
Nov 26 18:09:13 np0005537197 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 26 18:09:13 np0005537197 systemd[1]: Stopped nova_compute container.
Nov 26 18:09:13 np0005537197 systemd[1]: Starting nova_compute container...
Nov 26 18:09:13 np0005537197 systemd[1]: Started libcrun container.
Nov 26 18:09:13 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1fe25b828556e1b57261f35fdf806cc1a19c1ccf38b099d6d0267d6f2e77bf/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 26 18:09:13 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1fe25b828556e1b57261f35fdf806cc1a19c1ccf38b099d6d0267d6f2e77bf/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 26 18:09:13 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1fe25b828556e1b57261f35fdf806cc1a19c1ccf38b099d6d0267d6f2e77bf/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 26 18:09:13 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1fe25b828556e1b57261f35fdf806cc1a19c1ccf38b099d6d0267d6f2e77bf/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 26 18:09:13 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff1fe25b828556e1b57261f35fdf806cc1a19c1ccf38b099d6d0267d6f2e77bf/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 26 18:09:13 np0005537197 podman[189372]: 2025-11-26 23:09:13.886518013 +0000 UTC m=+0.143990194 container init 020019830bcf75bc086f375602c38352ca3a81fbe13eab2ae08d6da7f49d7d19 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Nov 26 18:09:13 np0005537197 podman[189372]: 2025-11-26 23:09:13.896733976 +0000 UTC m=+0.154206117 container start 020019830bcf75bc086f375602c38352ca3a81fbe13eab2ae08d6da7f49d7d19 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=nova_compute, tcib_managed=true, io.buildah.version=1.41.3)
Nov 26 18:09:13 np0005537197 podman[189372]: nova_compute
Nov 26 18:09:13 np0005537197 nova_compute[189387]: + sudo -E kolla_set_configs
Nov 26 18:09:13 np0005537197 systemd[1]: Started nova_compute container.
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Validating config file
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Copying service configuration files
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Deleting /etc/ceph
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Creating directory /etc/ceph
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Setting permission for /etc/ceph
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Writing out command to execute
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 26 18:09:14 np0005537197 nova_compute[189387]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 26 18:09:14 np0005537197 nova_compute[189387]: ++ cat /run_command
Nov 26 18:09:14 np0005537197 nova_compute[189387]: + CMD=nova-compute
Nov 26 18:09:14 np0005537197 nova_compute[189387]: + ARGS=
Nov 26 18:09:14 np0005537197 nova_compute[189387]: + sudo kolla_copy_cacerts
Nov 26 18:09:14 np0005537197 nova_compute[189387]: + [[ ! -n '' ]]
Nov 26 18:09:14 np0005537197 nova_compute[189387]: + . kolla_extend_start
Nov 26 18:09:14 np0005537197 nova_compute[189387]: + echo 'Running command: '\''nova-compute'\'''
Nov 26 18:09:14 np0005537197 nova_compute[189387]: Running command: 'nova-compute'
Nov 26 18:09:14 np0005537197 nova_compute[189387]: + umask 0022
Nov 26 18:09:14 np0005537197 nova_compute[189387]: + exec nova-compute
Nov 26 18:09:15 np0005537197 python3.9[189551]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 26 18:09:15 np0005537197 systemd[1]: Started libpod-conmon-6223bfc8a085b2f3ffbc5ee0176f014d22d6831007810a5215ac6c12a5f0576c.scope.
Nov 26 18:09:15 np0005537197 systemd[1]: Started libcrun container.
Nov 26 18:09:15 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c7f68f820931f5db1043c50c2458ee86b4fd9ea35536fe311ff8a1ec59a7993/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 26 18:09:15 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c7f68f820931f5db1043c50c2458ee86b4fd9ea35536fe311ff8a1ec59a7993/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 26 18:09:15 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c7f68f820931f5db1043c50c2458ee86b4fd9ea35536fe311ff8a1ec59a7993/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
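The "timestamps until 2038 (0x7fffffff)" in the xfs remount notices is simply the signed 32-bit epoch ceiling, which a two-line check confirms:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit second count since 1970-01-01.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00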
Nov 26 18:09:15 np0005537197 podman[189576]: 2025-11-26 23:09:15.434488146 +0000 UTC m=+0.201837643 container init 6223bfc8a085b2f3ffbc5ee0176f014d22d6831007810a5215ac6c12a5f0576c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 18:09:15 np0005537197 podman[189576]: 2025-11-26 23:09:15.444302348 +0000 UTC m=+0.211651795 container start 6223bfc8a085b2f3ffbc5ee0176f014d22d6831007810a5215ac6c12a5f0576c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 26 18:09:15 np0005537197 python3.9[189551]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
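The ansible-containers.podman.podman_container task above ultimately reduces to the single `podman start nova_compute_init` shown in the debug line, since the container already exists. A hedged sketch of that idempotent start check using standard podman CLI verbs (the module's internals are not shown in the log and are assumed):

    # Sketch of an idempotent "ensure started" check like the one the
    # podman_container module performs here; CLI verbs are standard podman.
    import subprocess

    def ensure_started(name: str) -> None:
        exists = subprocess.run(
            ["podman", "container", "exists", name]).returncode == 0
        if not exists:
            raise RuntimeError(f"container {name} must be created first")
        running = subprocess.run(
            ["podman", "inspect", "-f", "{{.State.Running}}", name],
            capture_output=True, text=True).stdout.strip() == "true"
        if not running:
            subprocess.run(["podman", "start", name], check=True)

    ensure_started("nova_compute_init")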
Nov 26 18:09:15 np0005537197 nova_compute_init[189611]: INFO:nova_statedir:Applying nova statedir ownership
Nov 26 18:09:15 np0005537197 nova_compute_init[189611]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 26 18:09:15 np0005537197 nova_compute_init[189611]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 26 18:09:15 np0005537197 nova_compute_init[189611]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 26 18:09:15 np0005537197 nova_compute_init[189611]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 26 18:09:15 np0005537197 nova_compute_init[189611]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 26 18:09:15 np0005537197 nova_compute_init[189611]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 26 18:09:15 np0005537197 nova_compute_init[189611]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 26 18:09:15 np0005537197 nova_compute_init[189611]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 26 18:09:15 np0005537197 nova_compute_init[189611]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 26 18:09:15 np0005537197 nova_compute_init[189611]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 26 18:09:15 np0005537197 nova_compute_init[189611]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 26 18:09:15 np0005537197 nova_compute_init[189611]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 26 18:09:15 np0005537197 nova_compute_init[189611]: INFO:nova_statedir:Nova statedir ownership complete
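The nova_compute_init lines above document the statedir ownership pass: walk /var/lib/nova, compare each path's uid:gid against the target 42436:42436, chown only on mismatch, reset the SELinux context, and skip paths named in NOVA_STATEDIR_OWNERSHIP_SKIP (here /var/lib/nova/compute_id). A condensed sketch of that loop; the real logic is /sbin/nova_statedir_ownership.py, set_context stands in for the actual SELinux handling, and treating the skip variable as a colon-separated list is an assumption:

    # Condensed sketch of the ownership pass logged above. Target uid/gid and
    # the skip env var come from the log; set_context is a hypothetical helper.
    import os

    TARGET_UID, TARGET_GID = 42436, 42436
    SKIP = set(os.environ.get("NOVA_STATEDIR_OWNERSHIP_SKIP", "").split(":"))

    def set_context(path):
        # placeholder for restoring system_u:object_r:container_file_t:s0
        pass

    def apply_ownership(root="/var/lib/nova"):
        for dirpath, dirnames, filenames in os.walk(root):
            for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
                if path in SKIP:
                    continue
                st = os.lstat(path)
                if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                    # logged as "Changing ownership of ... to 42436:42436"
                    os.lchown(path, TARGET_UID, TARGET_GID)
                set_context(path)

Chowning only on mismatch is what produces the "already 42436:42436" lines for /var/lib/nova/.ssh and keeps the pass cheap on restarts.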
Nov 26 18:09:15 np0005537197 systemd[1]: libpod-6223bfc8a085b2f3ffbc5ee0176f014d22d6831007810a5215ac6c12a5f0576c.scope: Deactivated successfully.
Nov 26 18:09:15 np0005537197 podman[189595]: 2025-11-26 23:09:15.541577532 +0000 UTC m=+0.118914767 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 26 18:09:15 np0005537197 podman[189631]: 2025-11-26 23:09:15.596679603 +0000 UTC m=+0.043103178 container died 6223bfc8a085b2f3ffbc5ee0176f014d22d6831007810a5215ac6c12a5f0576c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 18:09:15 np0005537197 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6223bfc8a085b2f3ffbc5ee0176f014d22d6831007810a5215ac6c12a5f0576c-userdata-shm.mount: Deactivated successfully.
Nov 26 18:09:15 np0005537197 systemd[1]: var-lib-containers-storage-overlay-7c7f68f820931f5db1043c50c2458ee86b4fd9ea35536fe311ff8a1ec59a7993-merged.mount: Deactivated successfully.
Nov 26 18:09:15 np0005537197 podman[189631]: 2025-11-26 23:09:15.645873182 +0000 UTC m=+0.092296777 container cleanup 6223bfc8a085b2f3ffbc5ee0176f014d22d6831007810a5215ac6c12a5f0576c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251125)
Nov 26 18:09:15 np0005537197 systemd[1]: libpod-conmon-6223bfc8a085b2f3ffbc5ee0176f014d22d6831007810a5215ac6c12a5f0576c.scope: Deactivated successfully.
Nov 26 18:09:15 np0005537197 nova_compute[189387]: 2025-11-26 23:09:15.961 189391 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 26 18:09:15 np0005537197 nova_compute[189387]: 2025-11-26 23:09:15.962 189391 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 26 18:09:15 np0005537197 nova_compute[189387]: 2025-11-26 23:09:15.962 189391 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 26 18:09:15 np0005537197 nova_compute[189387]: 2025-11-26 23:09:15.962 189391 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
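os_vif discovers its VIF plugins through setuptools entry points, which is why linux_bridge, noop, and ovs each appear as a separately loaded class above. A hedged sketch of that discovery using stevedore, the entry-point manager the oslo ecosystem builds on (the 'os_vif' namespace matches the plugins shown; the loading details are simplified):

    # Sketch of entry-point driven plugin discovery as logged above.
    from stevedore import extension

    mgr = extension.ExtensionManager(namespace="os_vif", invoke_on_load=False)
    for ext in mgr:
        # mirrors "Loaded VIF plugin class '<class ...>' with name '...'"
        print(f"Loaded VIF plugin class {ext.plugin!r} with name {ext.name!r}")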
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.092 189391 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.115 189391 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.115 189391 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
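The failing grep above is an expected capability probe, not an error: nova greps the iscsiadm binary for the node.session.scan token to decide whether manual iSCSI scan mode is available, and exit status 1 just means the token is absent, so no retry is needed. A sketch of the probe with the real oslo_concurrency.processutils API, where check_exit_code lists the statuses treated as success:

    # Sketch of the capability probe logged above.
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        "grep", "-F", "node.session.scan", "/sbin/iscsiadm",
        check_exit_code=[0, 1])   # 1 = token absent, still a valid answer
    supports_manual_scan = bool(out)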
Nov 26 18:09:16 np0005537197 systemd[1]: session-24.scope: Deactivated successfully.
Nov 26 18:09:16 np0005537197 systemd[1]: session-24.scope: Consumed 2min 15.226s CPU time.
Nov 26 18:09:16 np0005537197 systemd-logind[819]: Session 24 logged out. Waiting for processes to exit.
Nov 26 18:09:16 np0005537197 systemd-logind[819]: Removed session 24.
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.565 189391 INFO nova.virt.driver [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.688 189391 INFO nova.compute.provider_config [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
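Loading 'libvirt.LibvirtDriver' is driven by the compute_driver option (visible in the config dump below), and the provider_config line only reports that /etc/nova/provider_config/ contains no YAML overrides. A hedged sketch of how a dotted driver string like that can be resolved to a class; nova's actual loader lives in nova.virt.driver, and this is just the generic importlib pattern it resembles:

    # Hypothetical resolver for a compute_driver value such as
    # 'libvirt.LibvirtDriver'; the base package is an assumption.
    import importlib

    def load_driver(spec: str, base_package: str = "nova.virt"):
        module_name, class_name = spec.rsplit(".", 1)
        module = importlib.import_module(f"{base_package}.{module_name}")
        return getattr(module, class_name)

Under those assumptions, load_driver("libvirt.LibvirtDriver") imports nova.virt.libvirt and returns its LibvirtDriver class.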
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.706 189391 DEBUG oslo_concurrency.lockutils [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.707 189391 DEBUG oslo_concurrency.lockutils [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.708 189391 DEBUG oslo_concurrency.lockutils [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
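The acquire/release pair around "singleton_lock" comes from oslo.service setting up its process launcher; oslo.concurrency serializes that critical section by lock name. The three log lines map onto this pattern with the real lockutils API:

    # The lock name is taken from the log; the body is illustrative only.
    from oslo_concurrency import lockutils

    with lockutils.lock("singleton_lock"):
        # critical section: one holder per lock name at a time
        pass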
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.708 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.709 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.709 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.709 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.709 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.710 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.710 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.710 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.710 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.711 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.711 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.711 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.711 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.712 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.712 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.712 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.712 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.713 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.713 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.713 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.713 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.713 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.714 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.714 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.714 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.715 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.715 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.715 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.716 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.716 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.716 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.716 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.717 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.717 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.717 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.718 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.718 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.718 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.718 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.719 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.719 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.719 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.719 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.720 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.720 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.720 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.720 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.721 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.721 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.721 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.722 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.722 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.722 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.722 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.723 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.723 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.723 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.723 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.724 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.724 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.724 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.724 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.725 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.725 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.725 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.725 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.725 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.726 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.726 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.726 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.727 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.727 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.727 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.727 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.728 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.728 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.728 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.728 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.729 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.729 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.729 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.729 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.729 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.730 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.730 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.730 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.730 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.730 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.731 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.731 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.731 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.731 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.732 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.732 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.732 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.732 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.732 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.733 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.733 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.733 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.733 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.734 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.734 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.734 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.734 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.735 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.735 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.735 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.736 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.736 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.736 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.736 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.737 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.737 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.737 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.737 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.737 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.738 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.738 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.738 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.738 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.738 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.739 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.739 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.739 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.739 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.740 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.740 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.740 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.741 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.741 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.741 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.742 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.742 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.742 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.743 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.743 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.743 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.743 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.744 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.744 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.744 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.744 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.744 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.744 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.744 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.745 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.745 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.745 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.745 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.745 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.745 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.745 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.746 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.746 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.746 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.746 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.746 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.746 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.746 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.747 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.747 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.747 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.747 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.747 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.747 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.747 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.748 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.748 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.748 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.748 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.748 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.748 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.748 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.748 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.749 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.749 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.749 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.749 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.749 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.749 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.750 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.750 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.750 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.750 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.750 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.750 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.750 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.750 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.751 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.751 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.751 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.751 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.751 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.751 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.751 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.752 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.752 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.752 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.752 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.752 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.752 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.752 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.753 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.753 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.753 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.753 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.753 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.753 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.753 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.754 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.754 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.754 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.754 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.754 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.754 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.754 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.755 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.755 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.755 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.755 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.755 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.755 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.755 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.756 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.756 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.756 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.756 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.756 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.756 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.756 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.756 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.757 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.757 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.757 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.757 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.757 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.757 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.757 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.758 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.758 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.758 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.758 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.758 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.758 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.758 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.759 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.759 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.759 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.759 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.759 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.759 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.759 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.760 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.760 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.760 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.760 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.760 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.760 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.760 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.761 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.761 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.761 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.761 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.761 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.761 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.761 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.762 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.762 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.762 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.762 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.762 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.762 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.762 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.763 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.763 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.763 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.763 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.763 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.763 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.763 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.764 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.764 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.764 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.764 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.764 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.764 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.764 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.765 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.765 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.765 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.765 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.765 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.765 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.765 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.765 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.766 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.766 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.766 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.766 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.766 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.766 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.766 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.767 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.767 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.767 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.767 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.767 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.767 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.767 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.768 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.768 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.768 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.768 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.768 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.768 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.768 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.769 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.769 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.769 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.769 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.769 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.769 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.769 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.769 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.770 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.770 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.770 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.770 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.770 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.770 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.770 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.771 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.771 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.771 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.771 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.771 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.771 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.771 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.772 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.772 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.772 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.772 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.772 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.772 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.772 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.773 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.773 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.773 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.773 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.773 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.773 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.773 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.773 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.774 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.774 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.774 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.774 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.774 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.775 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.775 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.775 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
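All of the image_cache values above are defaults; each can be overridden in nova.conf and read back through group attribute access. A short sketch of the override path (the 1200 value and the temporary file are illustrative only, not from this log):

    import tempfile

    from oslo_config import cfg

    conf = cfg.ConfigOpts()
    conf.register_opts(
        [cfg.IntOpt('manager_interval', default=2400),
         cfg.BoolOpt('remove_unused_base_images', default=True)],
        group='image_cache')

    with tempfile.NamedTemporaryFile('w', suffix='.conf', delete=False) as f:
        f.write('[image_cache]\nmanager_interval = 1200\n')

    conf(['--config-file', f.name])
    print(conf.image_cache.manager_interval)           # 1200 (overridden)
    print(conf.image_cache.remove_unused_base_images)  # True (default)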
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.775 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.775 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.775 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.775 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.776 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.776 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.776 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.776 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.776 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.776 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.776 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.777 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.777 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.777 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.777 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.777 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.777 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.777 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.777 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.778 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.778 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.778 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.778 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.778 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.778 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.778 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
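The same client option names (cafile, certfile, timeout, connect_retries, valid_interfaces, service_type, and so on) recur under glance and ironic above, and under keystone and neutron below, because one shared keystoneauth session/adapter option set is registered once per service group. Nova builds that set with keystoneauth1's option loaders; the helper below is a simplified, illustrative version of the pattern, not Nova's code:

    from oslo_config import cfg

    def adapter_opts(default_service_type):
        # Subset of the per-service options visible in this dump.
        return [cfg.StrOpt('service_type', default=default_service_type),
                cfg.ListOpt('valid_interfaces', default=['internal', 'public']),
                cfg.IntOpt('timeout')]

    conf = cfg.ConfigOpts()
    for group, service in (('glance', 'image'), ('ironic', 'baremetal')):
        conf.register_opts(adapter_opts(service), group=group)

    conf([])
    print(conf.ironic.service_type)  # baremetal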
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.779 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.779 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
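key_manager.fixed_key is printed as **** rather than as its real value because options declared with secret=True are masked by log_opt_values(). A minimal sketch of that behavior; the override value is a placeholder, not real key material:

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    conf = cfg.ConfigOpts()
    conf.register_opts([cfg.StrOpt('fixed_key', secret=True)],
                       group='key_manager')

    conf([])
    conf.set_override('fixed_key', 'not-a-real-key', group='key_manager')
    conf.log_opt_values(LOG, logging.DEBUG)  # key_manager.fixed_key = ****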
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.779 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.779 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.779 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.779 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.780 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.780 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.780 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.780 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.780 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.780 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.780 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.780 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.781 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.781 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.781 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.781 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.781 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.781 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.781 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.782 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.782 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.782 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.782 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.782 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.782 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.782 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.783 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.783 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.783 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.783 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.783 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.783 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.783 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.784 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.784 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.784 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.784 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.784 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.784 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.784 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.784 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.785 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.785 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.785 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.785 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.785 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.785 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.785 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.786 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.786 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.786 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.786 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.786 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.786 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.786 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.787 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.787 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.787 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.787 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.787 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.787 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.787 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.788 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.788 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.788 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.788 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.788 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.788 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.788 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.789 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.789 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.789 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.789 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.789 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.789 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.789 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.790 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.790 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.790 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.790 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.790 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.790 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.790 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.791 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.791 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.791 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.791 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.791 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.791 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.791 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.792 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.792 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.792 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.792 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.792 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.792 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.792 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.793 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.793 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.793 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.793 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.793 189391 WARNING oslo_config.cfg [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 26 18:09:16 np0005537197 nova_compute[189387]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 26 18:09:16 np0005537197 nova_compute[189387]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 26 18:09:16 np0005537197 nova_compute[189387]: and ``live_migration_inbound_addr`` respectively.
Nov 26 18:09:16 np0005537197 nova_compute[189387]: ).  Its value may be silently ignored in the future.
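The WARNING above is produced by oslo.config itself: live_migration_uri is declared with deprecated_for_removal=True, so a user-supplied value is reported when the configuration is parsed and the option is read. A sketch of such a declaration, with the reason text abridged from the log; the temporary config file stands in for nova.conf and is illustrative only:

    import logging
    import tempfile

    from oslo_config import cfg

    logging.basicConfig(level=logging.WARNING)

    conf = cfg.ConfigOpts()
    conf.register_opts(
        [cfg.StrOpt('live_migration_uri',
                    deprecated_for_removal=True,
                    deprecated_reason='Use live_migration_scheme and '
                                      'live_migration_inbound_addr instead.')],
        group='libvirt')

    with tempfile.NamedTemporaryFile('w', suffix='.conf', delete=False) as f:
        f.write('[libvirt]\nlive_migration_uri = qemu+tls://%s/system\n')

    conf(['--config-file', f.name])
    # oslo.config emits the "Deprecated: ..." warning for the user-set value.
    print(conf.libvirt.live_migration_uri)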
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.793 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.794 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.794 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.794 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.794 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.794 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.794 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.795 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.795 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.795 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.795 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.795 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.795 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.795 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.796 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.796 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.796 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.796 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.796 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.796 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.796 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.797 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.797 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.797 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.797 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.797 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.797 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.797 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.798 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.798 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.798 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.798 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.798 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.798 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.798 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.799 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.799 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.799 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.799 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.799 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.799 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.799 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.800 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.800 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.800 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.800 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.800 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.800 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.800 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.801 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.801 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.801 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
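Every one of the option lines above and below is emitted by oslo.config's ConfigOpts.log_opt_values(), the cfg.py:2609 call site named in each record; nova-compute invokes it once at startup to dump the effective configuration at DEBUG level. A minimal sketch of that mechanism, assuming a trimmed option set rather than nova's real registry:

import logging

from oslo_config import cfg

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

CONF = cfg.ConfigOpts()
CONF.register_opts(
    [
        cfg.StrOpt('virt_type', default='kvm'),
        cfg.BoolOpt('swtpm_enabled', default=False),
        cfg.IntOpt('tx_queue_size'),
    ],
    group='libvirt',
)

CONF([])  # parse an empty argv; config-file and default values apply

# Logs one line per option, e.g. "libvirt.virt_type = kvm",
# the same shape as the journal records above.
CONF.log_opt_values(LOG, logging.DEBUG)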
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.801 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.801 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.801 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.801 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.802 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.802 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.802 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.802 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.802 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.802 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.802 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.803 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.803 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.803 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.803 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.803 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.803 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.803 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.804 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.804 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.804 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.804 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.804 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.804 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.804 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.804 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.805 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.805 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
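The [neutron] group just dumped describes a password-plugin Keystone auth over internal endpoints in regionOne, with service_type network. A sketch of how a consumer typically turns such a group into an authenticated client adapter with keystoneauth1; the option names match the dump, while the wiring below is illustrative rather than nova's exact code path:

from keystoneauth1 import loading as ks_loading
from oslo_config import cfg

CONF = cfg.CONF

def neutron_adapter():
    # Registers the standard auth/session/adapter options under [neutron]:
    # auth_type, cafile, timeout, valid_interfaces, region_name, ...
    ks_loading.register_auth_conf_options(CONF, 'neutron')
    ks_loading.register_session_conf_options(CONF, 'neutron')
    ks_loading.register_adapter_conf_options(CONF, 'neutron')

    auth = ks_loading.load_auth_from_conf_options(CONF, 'neutron')
    sess = ks_loading.load_session_from_conf_options(CONF, 'neutron', auth=auth)
    # service_type=network, valid_interfaces=['internal'] and
    # region_name=regionOne are picked up from the options dumped above.
    return ks_loading.load_adapter_from_conf_options(CONF, 'neutron', session=sess)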
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.805 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.805 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.805 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.805 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.806 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
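notifications.notification_format = unversioned means only the legacy notification payloads are emitted, even though the versioned topic list is still configured. A hedged sketch of the oslo.messaging notifier these options ultimately feed; the transport URL comes from nova.conf and the publisher_id here is a placeholder:

import oslo_messaging
from oslo_config import cfg

CONF = cfg.CONF

transport = oslo_messaging.get_notification_transport(CONF)
notifier = oslo_messaging.Notifier(
    transport,
    publisher_id='compute.np0005537197',    # placeholder
    driver='messagingv2',
    topics=['versioned_notifications'],     # notifications.versioned_notifications_topics
)
# Sends an INFO-level notification on the configured topic.
notifier.info({}, 'compute.instance.create.end', {'state': 'active'})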
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.806 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.806 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.806 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.806 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.806 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.806 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.807 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.807 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.807 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.807 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.807 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.807 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.807 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.808 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.808 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.808 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.808 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.808 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.808 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.808 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.809 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.809 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.809 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.809 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.809 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.809 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.809 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.810 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.810 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.810 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.810 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.810 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.810 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.810 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.810 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.811 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.811 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.811 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.811 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.811 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
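The [placement] group authenticates as user nova in project service against https://keystone-internal.openstack.svc:5000, again over internal endpoints in regionOne. Reusing the same keystoneauth pattern as the neutron sketch above, a minimal resource-provider listing; the microversion header is an assumption, not something recorded in this log:

from keystoneauth1 import loading as ks_loading
from oslo_config import cfg

CONF = cfg.CONF

def list_resource_providers():
    ks_loading.register_auth_conf_options(CONF, 'placement')
    ks_loading.register_session_conf_options(CONF, 'placement')
    ks_loading.register_adapter_conf_options(CONF, 'placement')
    auth = ks_loading.load_auth_from_conf_options(CONF, 'placement')
    sess = ks_loading.load_session_from_conf_options(CONF, 'placement', auth=auth)
    placement = ks_loading.load_adapter_from_conf_options(CONF, 'placement', session=sess)
    # GET /resource_providers on the internal placement endpoint.
    resp = placement.get('/resource_providers',
                         headers={'OpenStack-API-Version': 'placement 1.39'})
    return resp.json()['resource_providers']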
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.811 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.812 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.812 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.812 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.812 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.812 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.812 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.812 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.813 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.813 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.813 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.813 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.813 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
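These are the stock DbQuotaDriver limits: 10 instances, 20 cores and 51200 MB of RAM per project, with recheck_quota = True re-validating after resource creation to close race windows. Which limit binds first depends on the flavor; a quick check with a hypothetical 4 vCPU / 8192 MB flavor:

# Per-project limits from the [quota] dump above.
QUOTA = {'instances': 10, 'cores': 20, 'ram': 51200}

def max_servers(vcpus, ram_mb):
    """Servers of one flavor that fit before some quota is exhausted."""
    return min(QUOTA['instances'],
               QUOTA['cores'] // vcpus,
               QUOTA['ram'] // ram_mb)

print(max_servers(4, 8192))  # cores bind first: 20 // 4 = 5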
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.813 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.813 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.814 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.814 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.814 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.814 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.814 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.814 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.814 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.815 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.815 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.815 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.815 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.815 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.815 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.816 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.816 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.816 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.816 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.816 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.816 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.816 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.817 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.817 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.817 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.817 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.817 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.817 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.817 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.818 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.818 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.818 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.818 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.818 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.818 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.818 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.818 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
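With enabled_filters reduced to compute, image-properties and server-group checks, host ordering is left to the weighers, and nova's standard scheme is: normalize each weigher's raw values to [0, 1] across the surviving hosts, scale by the group's multiplier, and sum. The 1000000.0 on build_failure_weight_multiplier (and on cross_cell_move_weight_multiplier) makes those terms dominate everything else. A toy illustration of that normalization, with invented raw values:

def normalize(values):
    """Map raw weigher outputs onto [0, 1] across the candidate hosts."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

def total_weights(hosts, weighers):
    """weighers: iterable of (multiplier, raw_value_per_host) pairs."""
    totals = [0.0] * len(hosts)
    for multiplier, raw in weighers:
        for i, w in enumerate(normalize(raw)):
            totals[i] += multiplier * w
    return dict(zip(hosts, totals))

# Free RAM slightly favors host-b, but one recent build failure on
# host-a, scaled by 1000000.0 (negated so failures repel), dwarfs it.
print(total_weights(
    ['host-a', 'host-b'],
    [(1.0, [2048.0, 8192.0]),     # ram weigher: free MB (invented)
     (-1000000.0, [1.0, 0.0])],   # build failure count (invented)
))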
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.819 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.819 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.819 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.819 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.819 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.820 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.820 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.820 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.820 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.820 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.820 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.820 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.821 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.821 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.821 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.821 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.821 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.821 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.821 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.822 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
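service_user.send_service_user_token = True makes nova attach its own service credentials alongside the incoming user token on outbound API calls, so long-running operations survive user-token expiry. The keystoneauth1 primitive behind that is real; the wiring below is a sketch:

from keystoneauth1 import loading as ks_loading
from keystoneauth1 import service_token
from oslo_config import cfg

CONF = cfg.CONF

def wrap_with_service_token(user_auth):
    """Pair the end user's auth plugin with the [service_user] identity."""
    ks_loading.register_auth_conf_options(CONF, 'service_user')
    service_auth = ks_loading.load_auth_from_conf_options(CONF, 'service_user')
    # Requests made through this plugin send both X-Auth-Token (user)
    # and X-Service-Token (nova's service user).
    return service_token.ServiceTokenAuthWrapper(
        user_auth=user_auth, service_auth=service_auth)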
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.822 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.822 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.822 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.822 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.822 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.823 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.823 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.823 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.823 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.823 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.823 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.823 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.824 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.824 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.824 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.824 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.824 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
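upgrade_levels.compute = auto pins compute RPC compatibility to the oldest nova-compute service version currently registered, instead of a hard-coded release alias; the resulting cap is handed to the oslo.messaging client. Illustrative only; the topic and version strings below are placeholders:

import oslo_messaging
from oslo_config import cfg

CONF = cfg.CONF

transport = oslo_messaging.get_rpc_transport(CONF)
target = oslo_messaging.Target(topic='compute', version='6.0')  # placeholders
# With 'auto' the effective cap is derived from the deployment's minimum
# service version; '6.0' stands in for that computed value here.
client = oslo_messaging.RPCClient(transport, target, version_cap='6.0')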
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.824 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.824 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.824 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.825 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.825 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.825 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.825 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.825 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.825 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.825 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.826 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.826 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.826 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.826 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.826 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.826 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.826 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.827 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.827 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.827 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.827 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.827 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.827 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.827 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.828 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.828 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.828 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.828 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.828 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.828 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.828 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.828 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.829 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.829 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.829 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.829 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.829 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.829 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.830 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.830 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.830 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.830 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.830 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.830 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
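[The vnc.* values logged above map one-to-one onto the [vnc] section of nova.conf. A minimal sketch of that section, reconstructed from the logged values (option names are the standard Nova ones; the surrounding file layout is assumed, not shown in the log):

    [vnc]
    enabled = true
    auth_schemes = none
    novncproxy_base_url = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html
    server_listen = ::0
    server_proxyclient_address = 192.168.122.100
]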
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.831 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.831 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.831 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.831 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.831 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.831 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.831 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.832 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.832 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.832 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.832 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.832 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.832 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.832 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.832 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.833 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.833 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.833 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.833 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.833 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.833 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.833 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.834 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.834 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.834 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.834 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.834 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.834 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.834 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.835 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.835 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.835 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.835 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.835 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.835 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.835 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.836 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.836 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.836 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.836 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.836 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.836 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.836 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.837 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.837 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.837 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
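[The oslo_policy.* values above show secure-RBAC enforcement switched on (enforce_new_defaults and enforce_scope both True). A sketch of the equivalent [oslo_policy] section of nova.conf, using only values taken from the log:

    [oslo_policy]
    enforce_new_defaults = true
    enforce_scope = true
    policy_file = policy.yaml
]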
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.837 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.837 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.837 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.837 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.838 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.838 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.838 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.838 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.838 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.838 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.838 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.839 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.839 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.839 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.839 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.839 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.839 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.839 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.840 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.840 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.840 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.840 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.840 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.840 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.840 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.841 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.841 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.841 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.841 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.841 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.841 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.841 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.842 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.842 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.842 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
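[The oslo_messaging_rabbit.* block above shows this deployment using durable quorum queues (amqp_durable_queues and rabbit_quorum_queue both True) rather than classic mirrored HA queues (rabbit_ha_queues = False). A sketch of the corresponding [oslo_messaging_rabbit] section, values copied from the log:

    [oslo_messaging_rabbit]
    amqp_durable_queues = true
    rabbit_quorum_queue = true
    rabbit_ha_queues = false
    heartbeat_timeout_threshold = 60
]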
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.842 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.842 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.842 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.842 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.843 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.843 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.843 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.843 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.843 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.843 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.844 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.844 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.844 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.844 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.844 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.844 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.844 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.844 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.845 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.845 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.845 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.845 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.845 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.845 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.845 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.846 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.846 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.846 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.846 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.846 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.846 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.846 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.847 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.847 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.847 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.847 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.847 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.847 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.847 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.848 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.848 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.848 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.848 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.848 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.848 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.848 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.849 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.849 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.849 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.849 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.849 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.849 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.849 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.849 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.850 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.850 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.850 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.850 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.850 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.850 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.850 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.851 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.851 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.851 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.851 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.851 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.851 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.851 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.852 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.852 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.852 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.852 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.852 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.852 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.852 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.853 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.853 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.853 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.853 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.853 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.853 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.853 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.854 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.854 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.854 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.854 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.854 189391 DEBUG oslo_service.service [None req-11664917-7ed0-41f7-a531-3c3fb906ee61 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
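[The row of asterisks above (logged from cfg.py:2613) closes the option dump that every preceding "... log_opt_values ... cfg.py:2609" line belongs to; it is produced by oslo.config's ConfigOpts.log_opt_values(), which oslo.service invokes at service start when debug logging is enabled. A minimal standalone sketch of the same mechanism (the option registered here is illustrative, not one of Nova's):

    import logging
    from oslo_config import cfg

    CONF = cfg.CONF
    # Illustrative option; Nova registers its real options the same way.
    CONF.register_opts([cfg.IntOpt('host_port', default=443)], group='vmware')

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF([])
    # Dumps every registered option at DEBUG level (options marked secret
    # are masked as ****), bracketed by separator lines of asterisks --
    # the same shape as the block recorded in this journal.
    CONF.log_opt_values(LOG, logging.DEBUG)
]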
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.855 189391 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.869 189391 INFO nova.virt.node [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Determined node identity de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 from /var/lib/nova/compute_id
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.870 189391 DEBUG nova.virt.libvirt.host [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.870 189391 DEBUG nova.virt.libvirt.host [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.871 189391 DEBUG nova.virt.libvirt.host [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.871 189391 DEBUG nova.virt.libvirt.host [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.887 189391 DEBUG nova.virt.libvirt.host [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f9269a7a070> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.890 189391 DEBUG nova.virt.libvirt.host [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f9269a7a070> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.892 189391 INFO nova.virt.libvirt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Connection event '1' reason 'None'
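
The startup sequence above (native event thread, green dispatch thread, connection to qemu:///system, lifecycle and connection event registration) follows libvirt's event API. A rough standalone sketch of the same pattern with the libvirt-python bindings; the callback body is illustrative only, not nova's actual handler:

    import threading
    import libvirt

    # The default event loop implementation must be registered before
    # opening a connection that is expected to deliver events.
    libvirt.virEventRegisterDefaultImpl()

    def _event_loop():
        # Nova's "native event thread" plays this role.
        while True:
            libvirt.virEventRunDefaultImpl()

    threading.Thread(target=_event_loop, daemon=True).start()

    def lifecycle_cb(conn, dom, event, detail, opaque):
        print('domain %s lifecycle event %d detail %d'
              % (dom.name(), event, detail))

    conn = libvirt.open('qemu:///system')
    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, lifecycle_cb, None)
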
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.904 189391 INFO nova.virt.libvirt.host [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Libvirt host capabilities <capabilities>
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  <host>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <uuid>d7e69efc-d84d-4224-8bbd-5fd303612f05</uuid>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <cpu>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <arch>x86_64</arch>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model>EPYC-Rome-v4</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <vendor>AMD</vendor>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <microcode version='16777317'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <signature family='23' model='49' stepping='0'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature name='x2apic'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature name='tsc-deadline'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature name='osxsave'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature name='hypervisor'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature name='tsc_adjust'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature name='spec-ctrl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature name='stibp'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature name='arch-capabilities'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature name='ssbd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature name='cmp_legacy'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature name='topoext'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature name='virt-ssbd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature name='lbrv'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature name='tsc-scale'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature name='vmcb-clean'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature name='pause-filter'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature name='pfthreshold'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature name='svme-addr-chk'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature name='rdctl-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature name='skip-l1dfl-vmentry'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature name='mds-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature name='pschange-mc-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <pages unit='KiB' size='4'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <pages unit='KiB' size='2048'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <pages unit='KiB' size='1048576'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </cpu>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <power_management>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <suspend_mem/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <suspend_disk/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <suspend_hybrid/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </power_management>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <iommu support='no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <migration_features>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <live/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <uri_transports>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <uri_transport>tcp</uri_transport>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <uri_transport>rdma</uri_transport>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </uri_transports>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </migration_features>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <topology>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <cells num='1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <cell id='0'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:          <memory unit='KiB'>7864324</memory>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:          <pages unit='KiB' size='4'>1966081</pages>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:          <pages unit='KiB' size='2048'>0</pages>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:          <distances>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:            <sibling id='0' value='10'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:          </distances>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:          <cpus num='8'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:          </cpus>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        </cell>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </cells>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </topology>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <cache>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </cache>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <secmodel>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model>selinux</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <doi>0</doi>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </secmodel>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <secmodel>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model>dac</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <doi>0</doi>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </secmodel>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  </host>
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  <guest>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <os_type>hvm</os_type>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <arch name='i686'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <wordsize>32</wordsize>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <domain type='qemu'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <domain type='kvm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </arch>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <features>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <pae/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <nonpae/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <acpi default='on' toggle='yes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <apic default='on' toggle='no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <cpuselection/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <deviceboot/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <disksnapshot default='on' toggle='no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <externalSnapshot/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </features>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  </guest>
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  <guest>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <os_type>hvm</os_type>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <arch name='x86_64'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <wordsize>64</wordsize>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <domain type='qemu'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <domain type='kvm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </arch>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <features>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <acpi default='on' toggle='yes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <apic default='on' toggle='no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <cpuselection/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <deviceboot/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <disksnapshot default='on' toggle='no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <externalSnapshot/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </features>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  </guest>
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 
Nov 26 18:09:16 np0005537197 nova_compute[189387]: </capabilities>
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 
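
The <capabilities> document above is the verbatim result of libvirt's getCapabilities call, logged once at startup. A small sketch that fetches and inspects the same XML via the libvirt-python bindings; the element paths follow the structure shown above:

    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open('qemu:///system')
    caps = ET.fromstring(conn.getCapabilities())

    # Host CPU model and vendor, from the <host><cpu> element above.
    cpu = caps.find('./host/cpu')
    print(cpu.findtext('model'), cpu.findtext('vendor'))  # EPYC-Rome-v4 AMD

    # Machine types advertised per guest architecture.
    for guest in caps.findall('guest'):
        arch = guest.find('arch')
        machines = [m.text for m in arch.findall('machine')]
        print(arch.get('name'), len(machines), 'machine types')
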
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.912 189391 DEBUG nova.virt.libvirt.host [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
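
For each guest architecture, nova iterates over the canonical machine types discovered above ({'pc', 'q35'} for i686) and asks libvirt for the corresponding domain capabilities; the dump that follows is the arch=i686, machine_type=pc result. A sketch of the equivalent direct query, assuming the libvirt-python binding's getDomainCapabilities keyword arguments:

    import libvirt

    conn = libvirt.open('qemu:///system')
    # Returns the <domainCapabilities> XML shown below.
    xml = conn.getDomainCapabilities(
        emulatorbin='/usr/libexec/qemu-kvm',
        arch='i686',
        machine='pc',
        virttype='kvm')
    print(xml.splitlines()[0])  # <domainCapabilities>
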
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.916 189391 DEBUG nova.virt.libvirt.host [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 26 18:09:16 np0005537197 nova_compute[189387]: <domainCapabilities>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  <path>/usr/libexec/qemu-kvm</path>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  <domain>kvm</domain>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  <arch>i686</arch>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  <vcpu max='240'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  <iothreads supported='yes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  <os supported='yes'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <enum name='firmware'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <loader supported='yes'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='type'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>rom</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>pflash</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='readonly'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>yes</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>no</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='secure'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>no</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </loader>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  </os>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  <cpu>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <mode name='host-passthrough' supported='yes'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='hostPassthroughMigratable'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>on</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>off</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </mode>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <mode name='maximum' supported='yes'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='maximumMigratable'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>on</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>off</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </mode>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <mode name='host-model' supported='yes'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <vendor>AMD</vendor>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='x2apic'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='tsc-deadline'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='hypervisor'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='tsc_adjust'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='spec-ctrl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='stibp'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='ssbd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='cmp_legacy'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='overflow-recov'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='succor'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='ibrs'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='amd-ssbd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='virt-ssbd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='lbrv'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='tsc-scale'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='vmcb-clean'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='flushbyasid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='pause-filter'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='pfthreshold'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='svme-addr-chk'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='disable' name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </mode>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <mode name='custom' supported='yes'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Broadwell'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Broadwell-IBRS'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Broadwell-noTSX'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Broadwell-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Broadwell-v2'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Broadwell-v3'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Broadwell-v4'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Cascadelake-Server'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Cascadelake-Server-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Cascadelake-Server-v2'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Cascadelake-Server-v3'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Cascadelake-Server-v4'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Cascadelake-Server-v5'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Cooperlake'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Cooperlake-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Cooperlake-v2'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Denverton'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='mpx'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Denverton-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='mpx'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Denverton-v2'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Denverton-v3'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Dhyana-v2'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Genoa'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amd-psfd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='auto-ibrs'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='no-nested-data-bp'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='null-sel-clr-base'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='stibp-always-on'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Genoa-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amd-psfd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='auto-ibrs'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='no-nested-data-bp'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='null-sel-clr-base'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='stibp-always-on'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Milan'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Milan-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Milan-v2'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amd-psfd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='no-nested-data-bp'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='null-sel-clr-base'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='stibp-always-on'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Rome'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Rome-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Rome-v2'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Rome-v3'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='EPYC-v3'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='EPYC-v4'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='GraniteRapids'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amx-bf16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amx-fp16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amx-int8'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amx-tile'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-fp16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fbsdp-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrc'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fzrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='mcdt-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pbrsb-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='prefetchiti'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='psdp-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xfd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='GraniteRapids-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amx-bf16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amx-fp16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amx-int8'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amx-tile'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-fp16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fbsdp-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrc'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fzrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='mcdt-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pbrsb-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='prefetchiti'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='psdp-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xfd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='GraniteRapids-v2'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amx-bf16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amx-fp16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amx-int8'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amx-tile'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx10'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx10-128'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx10-256'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx10-512'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-fp16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='cldemote'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fbsdp-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrc'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fzrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='mcdt-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='movdir64b'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='movdiri'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pbrsb-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='prefetchiti'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='psdp-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xfd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Haswell'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Haswell-IBRS'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Haswell-noTSX'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Haswell-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Haswell-v2'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Haswell-v3'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Haswell-v4'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-noTSX'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-v2'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-v3'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-v4'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-v5'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-v6'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-v7'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='IvyBridge'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='IvyBridge-IBRS'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='IvyBridge-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='IvyBridge-v2'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='KnightsMill'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-4fmaps'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-4vnniw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512er'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512pf'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='KnightsMill-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-4fmaps'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-4vnniw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512er'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512pf'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Opteron_G4'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fma4'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xop'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Opteron_G4-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fma4'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xop'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Opteron_G5'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fma4'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='tbm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xop'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Opteron_G5-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fma4'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='tbm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xop'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='SapphireRapids'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amx-bf16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amx-int8'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amx-tile'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-fp16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrc'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fzrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xfd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='SapphireRapids-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amx-bf16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amx-int8'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amx-tile'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-fp16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrc'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fzrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xfd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='SapphireRapids-v2'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amx-bf16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amx-int8'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amx-tile'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-fp16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fbsdp-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrc'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fzrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='psdp-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xfd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='SapphireRapids-v3'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amx-bf16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amx-int8'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amx-tile'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-fp16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='cldemote'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fbsdp-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrc'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fzrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='movdir64b'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='movdiri'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='psdp-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xfd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='SierraForest'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx-ifma'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx-ne-convert'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx-vnni-int8'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='cmpccxadd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fbsdp-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='mcdt-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pbrsb-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='psdp-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='SierraForest-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx-ifma'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx-ne-convert'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx-vnni-int8'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='cmpccxadd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fbsdp-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='mcdt-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pbrsb-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='psdp-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Client'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Client-IBRS'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Client-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Client-v2'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Client-v3'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Client-v4'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server-IBRS'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server-v2'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server-v3'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server-v4'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server-v5'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Snowridge'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='cldemote'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='core-capability'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='movdir64b'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='movdiri'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='mpx'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='split-lock-detect'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Snowridge-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='cldemote'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='core-capability'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='movdir64b'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='movdiri'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='mpx'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='split-lock-detect'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Snowridge-v2'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='cldemote'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='core-capability'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='movdir64b'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='movdiri'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='split-lock-detect'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Snowridge-v3'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='cldemote'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='core-capability'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='movdir64b'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='movdiri'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='split-lock-detect'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Snowridge-v4'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='cldemote'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='movdir64b'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='movdiri'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='athlon'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='3dnow'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='3dnowext'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='athlon-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='3dnow'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='3dnowext'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='core2duo'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='core2duo-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='coreduo'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='coreduo-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='n270'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='n270-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='phenom'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='3dnow'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='3dnowext'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='phenom-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='3dnow'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='3dnowext'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </mode>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  </cpu>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  <memoryBacking supported='yes'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <enum name='sourceType'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <value>file</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <value>anonymous</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <value>memfd</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  </memoryBacking>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  <devices>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <disk supported='yes'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='diskDevice'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>disk</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>cdrom</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>floppy</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>lun</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='bus'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>ide</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>fdc</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>scsi</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>virtio</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>usb</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>sata</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='model'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>virtio</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>virtio-transitional</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>virtio-non-transitional</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </disk>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <graphics supported='yes'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='type'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>vnc</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>egl-headless</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>dbus</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </graphics>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <video supported='yes'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='modelType'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>vga</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>cirrus</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>virtio</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>none</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>bochs</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>ramfb</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </video>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <hostdev supported='yes'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='mode'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>subsystem</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='startupPolicy'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>default</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>mandatory</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>requisite</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>optional</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='subsysType'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>usb</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>pci</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>scsi</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='capsType'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='pciBackend'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </hostdev>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <rng supported='yes'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='model'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>virtio</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>virtio-transitional</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>virtio-non-transitional</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='backendModel'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>random</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>egd</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>builtin</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </rng>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <filesystem supported='yes'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='driverType'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>path</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>handle</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>virtiofs</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </filesystem>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <tpm supported='yes'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='model'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>tpm-tis</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>tpm-crb</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='backendModel'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>emulator</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>external</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='backendVersion'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>2.0</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </tpm>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <redirdev supported='yes'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='bus'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>usb</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </redirdev>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <channel supported='yes'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='type'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>pty</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>unix</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </channel>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <crypto supported='yes'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='model'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='type'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>qemu</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='backendModel'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>builtin</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </crypto>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <interface supported='yes'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='backendType'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>default</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>passt</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </interface>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <panic supported='yes'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='model'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>isa</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>hyperv</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </panic>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <console supported='yes'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='type'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>null</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>vc</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>pty</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>dev</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>file</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>pipe</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>stdio</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>udp</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>tcp</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>unix</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>qemu-vdagent</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>dbus</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </console>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  </devices>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  <features>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <gic supported='no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <vmcoreinfo supported='yes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <genid supported='yes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <backingStoreInput supported='yes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <backup supported='yes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <async-teardown supported='yes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <ps2 supported='yes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <sev supported='no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <sgx supported='no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <hyperv supported='yes'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='features'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>relaxed</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>vapic</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>spinlocks</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>vpindex</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>runtime</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>synic</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>stimer</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>reset</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>vendor_id</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>frequencies</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>reenlightenment</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>tlbflush</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>ipi</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>avic</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>emsr_bitmap</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>xmm_input</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <defaults>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <spinlocks>4095</spinlocks>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <stimer_direct>on</stimer_direct>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <tlbflush_direct>on</tlbflush_direct>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <tlbflush_extended>on</tlbflush_extended>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </defaults>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </hyperv>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <launchSecurity supported='yes'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='sectype'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>tdx</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </launchSecurity>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  </features>
Nov 26 18:09:16 np0005537197 nova_compute[189387]: </domainCapabilities>
Nov 26 18:09:16 np0005537197 nova_compute[189387]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
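The <domainCapabilities> document that closes above is the payload nova walks to decide which custom-mode CPU models this host can offer: every <model usable='no'> entry is paired with a <blockers model='...'> list naming the features the host lacks. Below is a minimal, self-contained sketch of that walk; the caps_xml string is a hypothetical trimmed sample that mirrors two entries from the dump above (Westmere reported usable, Skylake-Server blocked), standing in for the full logged document.

import xml.etree.ElementTree as ET

# Trimmed, hypothetical sample mirroring the structure logged above;
# the real document carries the full blocker lists per model alias.
caps_xml = """<domainCapabilities>
  <cpu>
    <mode name='custom' supported='yes'>
      <model usable='yes' vendor='Intel'>Westmere</model>
      <model usable='no' vendor='Intel'>Skylake-Server</model>
      <blockers model='Skylake-Server'>
        <feature name='avx512f'/>
        <feature name='pku'/>
      </blockers>
    </mode>
  </cpu>
</domainCapabilities>"""

root = ET.fromstring(caps_xml)
custom = root.find("./cpu/mode[@name='custom']")
for model in custom.findall('model'):
    # A <blockers> element exists only for models marked usable='no'.
    blockers = custom.find("blockers[@model='%s']" % model.text)
    missing = [] if blockers is None else [f.get('name') for f in blockers.findall('feature')]
    print(model.text, model.get('usable'), missing)

Run against the full document, this prints one line per model alias, matching the usable/blockers pairs logged above.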
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.924 189391 DEBUG nova.virt.libvirt.volume.mount [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
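The next debug record dumps the same kind of capabilities document again, this time for arch=i686 on machine type q35. As a minimal sketch (not nova's actual code), such a document can be fetched with the libvirt Python bindings; the qemu:///system URI is an assumption for this host, while the other arguments are copied from the dump that follows.

import libvirt

conn = libvirt.openReadOnly('qemu:///system')  # assumed local connection URI
caps = conn.getDomainCapabilities(
    '/usr/libexec/qemu-kvm',   # emulator binary, per <path> in the dump below
    'i686',                    # <arch> in the dump below
    'pc-q35-rhel9.8.0',        # <machine> in the dump below
    'kvm',                     # <domain> (virt type) in the dump below
    0)
print(caps)                    # the <domainCapabilities> XML string
conn.close()

getDomainCapabilities() returns the XML as a string; nova issues the query per (arch, machine type) pair, which is why the log repeats the document with different <arch> values.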
Nov 26 18:09:16 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.928 189391 DEBUG nova.virt.libvirt.host [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 26 18:09:16 np0005537197 nova_compute[189387]: <domainCapabilities>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  <path>/usr/libexec/qemu-kvm</path>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  <domain>kvm</domain>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  <arch>i686</arch>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  <vcpu max='4096'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  <iothreads supported='yes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  <os supported='yes'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <enum name='firmware'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <loader supported='yes'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='type'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>rom</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>pflash</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='readonly'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>yes</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>no</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='secure'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>no</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </loader>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  </os>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:  <cpu>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <mode name='host-passthrough' supported='yes'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='hostPassthroughMigratable'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>on</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>off</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </mode>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <mode name='maximum' supported='yes'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <enum name='maximumMigratable'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>on</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <value>off</value>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </mode>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <mode name='host-model' supported='yes'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <vendor>AMD</vendor>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='x2apic'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='tsc-deadline'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='hypervisor'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='tsc_adjust'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='spec-ctrl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='stibp'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='ssbd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='cmp_legacy'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='overflow-recov'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='succor'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='ibrs'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='amd-ssbd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='virt-ssbd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='lbrv'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='tsc-scale'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='vmcb-clean'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='flushbyasid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='pause-filter'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='pfthreshold'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='svme-addr-chk'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <feature policy='disable' name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    </mode>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:    <mode name='custom' supported='yes'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Broadwell'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Broadwell-IBRS'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Broadwell-noTSX'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Broadwell-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Broadwell-v2'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Broadwell-v3'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Broadwell-v4'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Cascadelake-Server'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Cascadelake-Server-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Cascadelake-Server-v2'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Cascadelake-Server-v3'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Cascadelake-Server-v4'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Cascadelake-Server-v5'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Cooperlake'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Cooperlake-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Cooperlake-v2'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Denverton'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='mpx'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Denverton-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='mpx'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Denverton-v2'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Denverton-v3'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='Dhyana-v2'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Genoa'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amd-psfd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='auto-ibrs'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='no-nested-data-bp'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='null-sel-clr-base'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='stibp-always-on'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Genoa-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amd-psfd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='auto-ibrs'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='no-nested-data-bp'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='null-sel-clr-base'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='stibp-always-on'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Milan'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Milan-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Milan-v2'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='amd-psfd'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='no-nested-data-bp'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='null-sel-clr-base'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='stibp-always-on'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Rome'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Rome-v1'>
Nov 26 18:09:16 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Rome-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Rome-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='EPYC-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='EPYC-v4'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='GraniteRapids'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-int8'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-tile'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fbsdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrc'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fzrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='mcdt-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pbrsb-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='prefetchiti'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='psdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xfd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='GraniteRapids-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-int8'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-tile'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fbsdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrc'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fzrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='mcdt-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pbrsb-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='prefetchiti'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='psdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xfd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='GraniteRapids-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-int8'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-tile'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx10'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx10-128'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx10-256'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx10-512'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='cldemote'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fbsdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrc'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fzrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='mcdt-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdir64b'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdiri'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pbrsb-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='prefetchiti'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='psdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xfd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Haswell'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Haswell-IBRS'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Haswell-noTSX'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Haswell-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Haswell-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Haswell-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Haswell-v4'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-noTSX'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-v4'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-v5'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-v6'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-v7'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='IvyBridge'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='IvyBridge-IBRS'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='IvyBridge-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='IvyBridge-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='KnightsMill'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-4fmaps'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-4vnniw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512er'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512pf'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='KnightsMill-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-4fmaps'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-4vnniw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512er'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512pf'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Opteron_G4'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fma4'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xop'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Opteron_G4-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fma4'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xop'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Opteron_G5'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fma4'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='tbm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xop'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Opteron_G5-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fma4'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='tbm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xop'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='SapphireRapids'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-int8'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-tile'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrc'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fzrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xfd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='SapphireRapids-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-int8'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-tile'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrc'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fzrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xfd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='SapphireRapids-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-int8'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-tile'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fbsdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrc'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fzrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='psdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xfd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='SapphireRapids-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-int8'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-tile'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='cldemote'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fbsdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrc'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fzrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdir64b'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdiri'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='psdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xfd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='SierraForest'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-ne-convert'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni-int8'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='cmpccxadd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fbsdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='mcdt-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pbrsb-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='psdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='SierraForest-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-ne-convert'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni-int8'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='cmpccxadd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fbsdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='mcdt-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pbrsb-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='psdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Client'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Client-IBRS'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Client-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Client-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Client-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Client-v4'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server-IBRS'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server-v4'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server-v5'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Snowridge'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='cldemote'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='core-capability'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdir64b'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdiri'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='mpx'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='split-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Snowridge-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='cldemote'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='core-capability'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdir64b'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdiri'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='mpx'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='split-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Snowridge-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='cldemote'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='core-capability'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdir64b'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdiri'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='split-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Snowridge-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='cldemote'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='core-capability'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdir64b'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdiri'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='split-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Snowridge-v4'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='cldemote'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdir64b'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdiri'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='athlon'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='3dnow'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='3dnowext'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='athlon-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='3dnow'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='3dnowext'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='core2duo'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='core2duo-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='coreduo'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='coreduo-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='n270'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='n270-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='phenom'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='3dnow'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='3dnowext'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='phenom-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='3dnow'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='3dnowext'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </mode>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  </cpu>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  <memoryBacking supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <enum name='sourceType'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <value>file</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <value>anonymous</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <value>memfd</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  </memoryBacking>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  <devices>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <disk supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='diskDevice'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>disk</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>cdrom</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>floppy</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>lun</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='bus'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>fdc</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>scsi</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>virtio</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>usb</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>sata</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='model'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>virtio</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>virtio-transitional</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>virtio-non-transitional</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </disk>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <graphics supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='type'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>vnc</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>egl-headless</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>dbus</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </graphics>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <video supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='modelType'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>vga</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>cirrus</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>virtio</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>none</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>bochs</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>ramfb</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </video>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <hostdev supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='mode'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>subsystem</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='startupPolicy'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>default</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>mandatory</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>requisite</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>optional</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='subsysType'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>usb</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>pci</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>scsi</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='capsType'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='pciBackend'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </hostdev>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <rng supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='model'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>virtio</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>virtio-transitional</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>virtio-non-transitional</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='backendModel'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>random</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>egd</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>builtin</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </rng>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <filesystem supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='driverType'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>path</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>handle</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>virtiofs</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </filesystem>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <tpm supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='model'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>tpm-tis</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>tpm-crb</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='backendModel'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>emulator</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>external</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='backendVersion'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>2.0</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </tpm>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <redirdev supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='bus'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>usb</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </redirdev>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <channel supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='type'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>pty</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>unix</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </channel>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <crypto supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='model'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='type'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>qemu</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='backendModel'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>builtin</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </crypto>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <interface supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='backendType'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>default</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>passt</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </interface>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <panic supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='model'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>isa</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>hyperv</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </panic>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <console supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='type'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>null</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>vc</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>pty</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>dev</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>file</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>pipe</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>stdio</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>udp</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>tcp</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>unix</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>qemu-vdagent</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>dbus</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </console>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  </devices>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  <features>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <gic supported='no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <vmcoreinfo supported='yes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <genid supported='yes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <backingStoreInput supported='yes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <backup supported='yes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <async-teardown supported='yes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <ps2 supported='yes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <sev supported='no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <sgx supported='no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <hyperv supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='features'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>relaxed</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>vapic</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>spinlocks</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>vpindex</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>runtime</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>synic</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>stimer</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>reset</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>vendor_id</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>frequencies</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>reenlightenment</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>tlbflush</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>ipi</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>avic</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>emsr_bitmap</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>xmm_input</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <defaults>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <spinlocks>4095</spinlocks>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <stimer_direct>on</stimer_direct>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <tlbflush_direct>on</tlbflush_direct>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <tlbflush_extended>on</tlbflush_extended>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </defaults>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </hyperv>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <launchSecurity supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='sectype'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>tdx</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </launchSecurity>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  </features>
Nov 26 18:09:17 np0005537197 nova_compute[189387]: </domainCapabilities>
Nov 26 18:09:17 np0005537197 nova_compute[189387]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 26 18:09:17 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.960 189391 DEBUG nova.virt.libvirt.host [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
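For reference, the domainCapabilities dump above (and the machine_type=pc dump that follows) is the document libvirt returns from its getDomainCapabilities call; the debug line above shows nova requesting it once per machine type. Below is a minimal sketch of fetching the same XML directly with the libvirt-python bindings; the connection URI is an assumption, while the emulator path, arch, and machine types are taken from the log itself:

    import libvirt

    # Connect to the local QEMU/KVM driver (qemu:///system is an assumed URI,
    # not taken from this log).
    conn = libvirt.open('qemu:///system')

    # Query once per machine type, mirroring the {'pc', 'q35'} set logged above.
    for machine in ('pc', 'q35'):
        xml = conn.getDomainCapabilities(
            '/usr/libexec/qemu-kvm',  # emulator binary, the <path> element in the dump
            'x86_64',                 # <arch>
            machine,                  # machine type
            'kvm',                    # virt type, matching <domain>kvm</domain>
        )
        print(xml)

    conn.close()

The equivalent one-off query from the shell is: virsh domcapabilities --virttype kvm --arch x86_64 --machine pc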
Nov 26 18:09:17 np0005537197 nova_compute[189387]: 2025-11-26 23:09:16.967 189391 DEBUG nova.virt.libvirt.host [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 26 18:09:17 np0005537197 nova_compute[189387]: <domainCapabilities>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  <path>/usr/libexec/qemu-kvm</path>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  <domain>kvm</domain>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  <arch>x86_64</arch>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  <vcpu max='240'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  <iothreads supported='yes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  <os supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <enum name='firmware'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <loader supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='type'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>rom</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>pflash</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='readonly'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>yes</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>no</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='secure'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>no</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </loader>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  </os>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  <cpu>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <mode name='host-passthrough' supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='hostPassthroughMigratable'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>on</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>off</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </mode>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <mode name='maximum' supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='maximumMigratable'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>on</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>off</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </mode>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <mode name='host-model' supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <vendor>AMD</vendor>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='x2apic'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='tsc-deadline'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='hypervisor'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='tsc_adjust'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='spec-ctrl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='stibp'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='ssbd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='cmp_legacy'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='overflow-recov'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='succor'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='ibrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='amd-ssbd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='virt-ssbd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='lbrv'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='tsc-scale'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='vmcb-clean'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='flushbyasid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='pause-filter'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='pfthreshold'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='svme-addr-chk'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='disable' name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </mode>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <mode name='custom' supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Broadwell'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Broadwell-IBRS'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Broadwell-noTSX'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Broadwell-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Broadwell-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Broadwell-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Broadwell-v4'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Cascadelake-Server'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Cascadelake-Server-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Cascadelake-Server-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Cascadelake-Server-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Cascadelake-Server-v4'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Cascadelake-Server-v5'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Cooperlake'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Cooperlake-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Cooperlake-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Denverton'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='mpx'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Denverton-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='mpx'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Denverton-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Denverton-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Dhyana-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Genoa'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amd-psfd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='auto-ibrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='no-nested-data-bp'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='null-sel-clr-base'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='stibp-always-on'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Genoa-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amd-psfd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='auto-ibrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='no-nested-data-bp'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='null-sel-clr-base'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='stibp-always-on'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Milan'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Milan-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Milan-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amd-psfd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='no-nested-data-bp'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='null-sel-clr-base'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='stibp-always-on'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Rome'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Rome-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Rome-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Rome-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='EPYC-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='EPYC-v4'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='GraniteRapids'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-int8'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-tile'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fbsdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrc'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fzrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='mcdt-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pbrsb-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='prefetchiti'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='psdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xfd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='GraniteRapids-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-int8'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-tile'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fbsdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrc'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fzrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='mcdt-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pbrsb-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='prefetchiti'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='psdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xfd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='GraniteRapids-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-int8'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-tile'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx10'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx10-128'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx10-256'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx10-512'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='cldemote'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fbsdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrc'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fzrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='mcdt-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdir64b'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdiri'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pbrsb-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='prefetchiti'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='psdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xfd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Haswell'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Haswell-IBRS'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Haswell-noTSX'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Haswell-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Haswell-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Haswell-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Haswell-v4'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-noTSX'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-v4'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-v5'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-v6'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-v7'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='IvyBridge'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='IvyBridge-IBRS'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='IvyBridge-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='IvyBridge-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='KnightsMill'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-4fmaps'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-4vnniw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512er'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512pf'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='KnightsMill-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-4fmaps'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-4vnniw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512er'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512pf'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Opteron_G4'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fma4'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xop'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Opteron_G4-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fma4'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xop'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Opteron_G5'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fma4'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='tbm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xop'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Opteron_G5-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fma4'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='tbm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xop'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='SapphireRapids'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-int8'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-tile'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrc'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fzrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xfd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='SapphireRapids-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-int8'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-tile'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrc'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fzrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xfd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='SapphireRapids-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-int8'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-tile'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fbsdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrc'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fzrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='psdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xfd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='SapphireRapids-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-int8'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-tile'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='cldemote'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fbsdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrc'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fzrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdir64b'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdiri'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='psdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xfd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='SierraForest'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-ne-convert'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni-int8'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='cmpccxadd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fbsdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='mcdt-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pbrsb-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='psdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='SierraForest-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-ne-convert'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni-int8'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='cmpccxadd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fbsdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='mcdt-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pbrsb-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='psdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Client'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Client-IBRS'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Client-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Client-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Client-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Client-v4'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server-IBRS'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server-v4'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server-v5'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Snowridge'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='cldemote'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='core-capability'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdir64b'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdiri'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='mpx'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='split-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Snowridge-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='cldemote'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='core-capability'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdir64b'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdiri'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='mpx'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='split-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Snowridge-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='cldemote'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='core-capability'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdir64b'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdiri'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='split-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Snowridge-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='cldemote'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='core-capability'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdir64b'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdiri'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='split-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Snowridge-v4'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='cldemote'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdir64b'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdiri'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='athlon'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='3dnow'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='3dnowext'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='athlon-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='3dnow'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='3dnowext'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='core2duo'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='core2duo-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='coreduo'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='coreduo-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='n270'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='n270-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='phenom'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='3dnow'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='3dnowext'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='phenom-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='3dnow'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='3dnowext'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </mode>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  </cpu>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  <memoryBacking supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <enum name='sourceType'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <value>file</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <value>anonymous</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <value>memfd</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  </memoryBacking>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  <devices>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <disk supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='diskDevice'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>disk</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>cdrom</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>floppy</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>lun</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='bus'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>ide</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>fdc</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>scsi</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>virtio</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>usb</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>sata</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='model'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>virtio</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>virtio-transitional</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>virtio-non-transitional</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </disk>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <graphics supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='type'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>vnc</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>egl-headless</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>dbus</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </graphics>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <video supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='modelType'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>vga</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>cirrus</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>virtio</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>none</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>bochs</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>ramfb</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </video>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <hostdev supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='mode'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>subsystem</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='startupPolicy'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>default</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>mandatory</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>requisite</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>optional</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='subsysType'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>usb</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>pci</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>scsi</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='capsType'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='pciBackend'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </hostdev>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <rng supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='model'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>virtio</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>virtio-transitional</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>virtio-non-transitional</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='backendModel'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>random</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>egd</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>builtin</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </rng>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <filesystem supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='driverType'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>path</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>handle</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>virtiofs</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </filesystem>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <tpm supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='model'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>tpm-tis</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>tpm-crb</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='backendModel'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>emulator</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>external</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='backendVersion'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>2.0</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </tpm>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <redirdev supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='bus'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>usb</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </redirdev>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <channel supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='type'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>pty</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>unix</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </channel>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <crypto supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='model'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='type'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>qemu</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='backendModel'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>builtin</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </crypto>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <interface supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='backendType'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>default</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>passt</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </interface>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <panic supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='model'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>isa</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>hyperv</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </panic>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <console supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='type'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>null</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>vc</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>pty</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>dev</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>file</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>pipe</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>stdio</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>udp</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>tcp</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>unix</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>qemu-vdagent</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>dbus</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </console>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  </devices>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  <features>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <gic supported='no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <vmcoreinfo supported='yes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <genid supported='yes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <backingStoreInput supported='yes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <backup supported='yes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <async-teardown supported='yes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <ps2 supported='yes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <sev supported='no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <sgx supported='no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <hyperv supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='features'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>relaxed</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>vapic</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>spinlocks</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>vpindex</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>runtime</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>synic</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>stimer</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>reset</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>vendor_id</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>frequencies</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>reenlightenment</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>tlbflush</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>ipi</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>avic</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>emsr_bitmap</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>xmm_input</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <defaults>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <spinlocks>4095</spinlocks>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <stimer_direct>on</stimer_direct>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <tlbflush_direct>on</tlbflush_direct>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <tlbflush_extended>on</tlbflush_extended>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </defaults>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </hyperv>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <launchSecurity supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='sectype'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>tdx</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </launchSecurity>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  </features>
Nov 26 18:09:17 np0005537197 nova_compute[189387]: </domainCapabilities>
Nov 26 18:09:17 np0005537197 nova_compute[189387]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 26 18:09:17 np0005537197 nova_compute[189387]: 2025-11-26 23:09:17.016 189391 DEBUG nova.virt.libvirt.host [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 26 18:09:17 np0005537197 nova_compute[189387]: <domainCapabilities>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  <path>/usr/libexec/qemu-kvm</path>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  <domain>kvm</domain>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  <arch>x86_64</arch>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  <vcpu max='4096'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  <iothreads supported='yes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  <os supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <enum name='firmware'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <value>efi</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <loader supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='type'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>rom</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>pflash</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='readonly'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>yes</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>no</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='secure'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>yes</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>no</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </loader>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  </os>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  <cpu>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <mode name='host-passthrough' supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='hostPassthroughMigratable'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>on</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>off</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </mode>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <mode name='maximum' supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='maximumMigratable'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>on</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>off</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </mode>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <mode name='host-model' supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <vendor>AMD</vendor>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='x2apic'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='tsc-deadline'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='hypervisor'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='tsc_adjust'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='spec-ctrl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='stibp'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='ssbd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='cmp_legacy'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='overflow-recov'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='succor'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='ibrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='amd-ssbd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='virt-ssbd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='lbrv'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='tsc-scale'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='vmcb-clean'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='flushbyasid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='pause-filter'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='pfthreshold'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='svme-addr-chk'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <feature policy='disable' name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </mode>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <mode name='custom' supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Broadwell'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Broadwell-IBRS'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Broadwell-noTSX'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Broadwell-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Broadwell-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Broadwell-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Broadwell-v4'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Cascadelake-Server'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Cascadelake-Server-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Cascadelake-Server-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Cascadelake-Server-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Cascadelake-Server-v4'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Cascadelake-Server-v5'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Cooperlake'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Cooperlake-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Cooperlake-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Denverton'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='mpx'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Denverton-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='mpx'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Denverton-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Denverton-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Dhyana-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Genoa'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amd-psfd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='auto-ibrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='no-nested-data-bp'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='null-sel-clr-base'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='stibp-always-on'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Genoa-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amd-psfd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='auto-ibrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='no-nested-data-bp'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='null-sel-clr-base'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='stibp-always-on'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Milan'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Milan-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Milan-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amd-psfd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='no-nested-data-bp'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='null-sel-clr-base'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='stibp-always-on'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Rome'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Rome-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Rome-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='EPYC-Rome-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='EPYC-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='EPYC-v4'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='GraniteRapids'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-int8'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-tile'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fbsdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrc'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fzrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='mcdt-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pbrsb-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='prefetchiti'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='psdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xfd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='GraniteRapids-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-int8'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-tile'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fbsdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrc'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fzrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='mcdt-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pbrsb-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='prefetchiti'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='psdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xfd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='GraniteRapids-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-int8'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-tile'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx10'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx10-128'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx10-256'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx10-512'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='cldemote'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fbsdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrc'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fzrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='mcdt-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdir64b'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdiri'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pbrsb-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='prefetchiti'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='psdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xfd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Haswell'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Haswell-IBRS'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Haswell-noTSX'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Haswell-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Haswell-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Haswell-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Haswell-v4'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-noTSX'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-v4'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-v5'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-v6'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Icelake-Server-v7'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='IvyBridge'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='IvyBridge-IBRS'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='IvyBridge-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='IvyBridge-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='KnightsMill'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-4fmaps'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-4vnniw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512er'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512pf'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='KnightsMill-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-4fmaps'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-4vnniw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512er'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512pf'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Opteron_G4'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fma4'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xop'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Opteron_G4-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fma4'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xop'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Opteron_G5'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fma4'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='tbm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xop'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Opteron_G5-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fma4'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='tbm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xop'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='SapphireRapids'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-int8'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-tile'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrc'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fzrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xfd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='SapphireRapids-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-int8'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-tile'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrc'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fzrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xfd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='SapphireRapids-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-int8'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-tile'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fbsdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrc'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fzrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='psdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xfd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='SapphireRapids-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-int8'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='amx-tile'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-bf16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-fp16'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512-vpopcntdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bitalg'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vbmi2'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='cldemote'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fbsdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrc'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fzrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='la57'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdir64b'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdiri'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='psdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='taa-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='tsx-ldtrk'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xfd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='SierraForest'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-ne-convert'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni-int8'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='cmpccxadd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fbsdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='mcdt-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pbrsb-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='psdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='SierraForest-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-ifma'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-ne-convert'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx-vnni-int8'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='bus-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='cmpccxadd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fbsdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='fsrs'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ibrs-all'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='mcdt-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pbrsb-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='psdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='sbdr-ssdp-no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='serialize'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vaes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='vpclmulqdq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Client'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Client-IBRS'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Client-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Client-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Client-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Client-v4'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server-IBRS'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='hle'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='rtm'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server-v4'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Skylake-Server-v5'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512bw'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512cd'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512dq'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512f'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='avx512vl'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='invpcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pcid'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='pku'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Snowridge'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='cldemote'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='core-capability'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdir64b'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdiri'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='mpx'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='split-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Snowridge-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='cldemote'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='core-capability'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdir64b'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdiri'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='mpx'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='split-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Snowridge-v2'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='cldemote'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='core-capability'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdir64b'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdiri'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='split-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Snowridge-v3'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='cldemote'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='core-capability'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdir64b'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdiri'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='split-lock-detect'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='Snowridge-v4'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='cldemote'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='erms'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='gfni'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdir64b'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='movdiri'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='xsaves'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='athlon'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='3dnow'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='3dnowext'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='athlon-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='3dnow'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='3dnowext'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='core2duo'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='core2duo-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='coreduo'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='coreduo-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='n270'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='n270-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='ss'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='phenom'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='3dnow'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='3dnowext'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <blockers model='phenom-v1'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='3dnow'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <feature name='3dnowext'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </blockers>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </mode>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  </cpu>
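[editor's note] The <mode> listing that closes above enumerates each named CPU model with a usable attribute and, for unusable models, the <blockers> features the host cannot satisfy. As a hedged illustration only (values chosen from the usable='yes' entries above, not taken from any guest on this host), a model from this list would be requested in a guest domain definition roughly like:

    <cpu mode='custom' match='exact'>
      <!-- illustrative sketch: SandyBridge is reported usable='yes' above;
           fallback='forbid' refuses to start if the host cannot provide it -->
      <model fallback='forbid'>SandyBridge</model>
    </cpu>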
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  <memoryBacking supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <enum name='sourceType'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <value>file</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <value>anonymous</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <value>memfd</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  </memoryBacking>
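[editor's note] The memoryBacking element advertises three sourceType values (file, anonymous, memfd). A minimal sketch of guest XML exercising one of them, assuming shared access is wanted (as virtiofs, shown further below, typically requires):

    <memoryBacking>
      <!-- memfd is one of the sourceType values advertised above;
           access mode='shared' is an assumption for virtiofs-style sharing -->
      <source type='memfd'/>
      <access mode='shared'/>
    </memoryBacking>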
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  <devices>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <disk supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='diskDevice'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>disk</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>cdrom</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>floppy</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>lun</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='bus'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>fdc</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>scsi</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>virtio</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>usb</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>sata</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='model'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>virtio</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>virtio-transitional</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>virtio-non-transitional</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </disk>
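[editor's note] The disk element above constrains guest disks to the advertised diskDevice, bus, and model values. A hedged sketch of a guest disk drawing only on those enums (the image path and target name are placeholders, not from this log):

    <disk type='file' device='disk'>
      <!-- device='disk' and bus='virtio' come from the enums above;
           the qcow2 source path is a hypothetical placeholder -->
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/example.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>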
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <graphics supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='type'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>vnc</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>egl-headless</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>dbus</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </graphics>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <video supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='modelType'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>vga</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>cirrus</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>virtio</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>none</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>bochs</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>ramfb</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </video>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <hostdev supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='mode'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>subsystem</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='startupPolicy'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>default</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>mandatory</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>requisite</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>optional</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='subsysType'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>usb</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>pci</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>scsi</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='capsType'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='pciBackend'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </hostdev>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <rng supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='model'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>virtio</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>virtio-transitional</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>virtio-non-transitional</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='backendModel'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>random</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>egd</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>builtin</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </rng>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <filesystem supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='driverType'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>path</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>handle</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>virtiofs</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </filesystem>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <tpm supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='model'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>tpm-tis</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>tpm-crb</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='backendModel'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>emulator</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>external</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='backendVersion'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>2.0</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </tpm>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <redirdev supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='bus'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>usb</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </redirdev>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <channel supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='type'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>pty</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>unix</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </channel>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <crypto supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='model'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='type'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>qemu</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='backendModel'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>builtin</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </crypto>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <interface supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='backendType'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>default</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>passt</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </interface>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <panic supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='model'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>isa</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>hyperv</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </panic>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <console supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='type'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>null</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>vc</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>pty</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>dev</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>file</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>pipe</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>stdio</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>udp</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>tcp</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>unix</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>qemu-vdagent</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>dbus</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </console>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  </devices>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  <features>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <gic supported='no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <vmcoreinfo supported='yes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <genid supported='yes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <backingStoreInput supported='yes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <backup supported='yes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <async-teardown supported='yes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <ps2 supported='yes'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <sev supported='no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <sgx supported='no'/>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <hyperv supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='features'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>relaxed</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>vapic</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>spinlocks</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>vpindex</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>runtime</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>synic</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>stimer</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>reset</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>vendor_id</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>frequencies</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>reenlightenment</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>tlbflush</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>ipi</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>avic</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>emsr_bitmap</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>xmm_input</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <defaults>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <spinlocks>4095</spinlocks>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <stimer_direct>on</stimer_direct>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <tlbflush_direct>on</tlbflush_direct>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <tlbflush_extended>on</tlbflush_extended>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </defaults>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </hyperv>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    <launchSecurity supported='yes'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      <enum name='sectype'>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:        <value>tdx</value>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:      </enum>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:    </launchSecurity>
Nov 26 18:09:17 np0005537197 nova_compute[189387]:  </features>
Nov 26 18:09:17 np0005537197 nova_compute[189387]: </domainCapabilities>
Nov 26 18:09:17 np0005537197 nova_compute[189387]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
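
The XML dump above is the tail of the libvirt <domainCapabilities> document that nova-compute fetches at startup (_get_domain_capabilities) to learn what the hypervisor supports. A minimal sketch of reading the same enums with the standard library, assuming the full document is held in domcaps_xml (a placeholder, not a name from Nova):

    import xml.etree.ElementTree as ET

    caps = ET.fromstring(domcaps_xml)  # full <domainCapabilities> XML as logged above
    graphics = [v.text for v in caps.findall("./devices/graphics/enum[@name='type']/value")]
    video = [v.text for v in caps.findall("./devices/video/enum[@name='modelType']/value")]
    print(graphics)  # ['vnc', 'egl-headless', 'dbus'] per the log
    print(video)     # ['vga', 'cirrus', 'virtio', 'none', 'bochs', 'ramfb']
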
Nov 26 18:09:17 np0005537197 nova_compute[189387]: 2025-11-26 23:09:17.084 189391 DEBUG nova.virt.libvirt.host [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 26 18:09:17 np0005537197 nova_compute[189387]: 2025-11-26 23:09:17.084 189391 INFO nova.virt.libvirt.host [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Secure Boot support detected#033[00m
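
The secure-boot probe reuses the same capabilities document. A hedged sketch (not Nova's exact code), assuming the document's <os supported='yes'> section, which sits above this excerpt, carries the usual <loader><enum name='secure'> listing 'yes' when OVMF secure-boot firmware is available:

    import xml.etree.ElementTree as ET

    caps = ET.fromstring(domcaps_xml)  # same placeholder as above
    secure = caps.findall("./os/loader/enum[@name='secure']/value")
    print(any(v.text == 'yes' for v in secure))  # True here: "Secure Boot support detected"
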
Nov 26 18:09:17 np0005537197 nova_compute[189387]: 2025-11-26 23:09:17.086 189391 INFO nova.virt.libvirt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 26 18:09:17 np0005537197 nova_compute[189387]: 2025-11-26 23:09:17.096 189391 DEBUG nova.virt.libvirt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Nov 26 18:09:17 np0005537197 nova_compute[189387]: 2025-11-26 23:09:17.128 189391 INFO nova.virt.node [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Determined node identity de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 from /var/lib/nova/compute_id#033[00m
Nov 26 18:09:17 np0005537197 nova_compute[189387]: 2025-11-26 23:09:17.148 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Verified node de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 matches my host compute-0.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m
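
The node identity comes from a one-line UUID file, and the rename check then confirms that the compute node record this UUID maps to still belongs to this host. A minimal sketch of the read, assuming the single-line file format implied by the log:

    import socket

    with open('/var/lib/nova/compute_id') as f:
        node_uuid = f.read().strip()  # 'de65df0c-bd6c-4ecc-b0a9-30ae4314ce78' on this host
    print(node_uuid, socket.getfqdn())
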
Nov 26 18:09:17 np0005537197 nova_compute[189387]: 2025-11-26 23:09:17.184 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Nov 26 18:09:17 np0005537197 nova_compute[189387]: 2025-11-26 23:09:17.667 189391 ERROR nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Could not retrieve compute node resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 and therefore unable to error out any instances stuck in BUILDING state. Error: Failed to retrieve allocations for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider 'de65df0c-bd6c-4ecc-b0a9-30ae4314ce78' not found: No resource provider with uuid de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 found  ", "request_id": "req-da92ba8d-7ad7-403c-91ec-d4eff21c9e80"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider 'de65df0c-bd6c-4ecc-b0a9-30ae4314ce78' not found: No resource provider with uuid de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 found  ", "request_id": "req-da92ba8d-7ad7-403c-91ec-d4eff21c9e80"}]}#033[00m
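
This 404 is expected on a freshly enrolled compute node: the resource tracker has not yet created the provider record in Placement, so the allocation lookup fails; the record is created moments later (23:09:18.542 below) and the error does not recur. A hedged sketch of the failing call, with a placeholder endpoint and token rather than values from this log:

    import requests

    uuid = 'de65df0c-bd6c-4ecc-b0a9-30ae4314ce78'
    resp = requests.get(
        f'http://placement.example.com/resource_providers/{uuid}/allocations',
        headers={'X-Auth-Token': 'TOKEN', 'OpenStack-API-Version': 'placement 1.36'},
    )
    print(resp.status_code)  # 404 until the provider record exists
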
Nov 26 18:09:17 np0005537197 nova_compute[189387]: 2025-11-26 23:09:17.695 189391 DEBUG oslo_concurrency.lockutils [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 18:09:17 np0005537197 nova_compute[189387]: 2025-11-26 23:09:17.695 189391 DEBUG oslo_concurrency.lockutils [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 18:09:17 np0005537197 nova_compute[189387]: 2025-11-26 23:09:17.696 189391 DEBUG oslo_concurrency.lockutils [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
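
The acquire/held/released triples above are emitted by oslo.concurrency's lock decorator. The same pattern in miniature, using the real lockutils API:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        pass  # body runs with the named lock held, producing logs like the ones above
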
Nov 26 18:09:17 np0005537197 nova_compute[189387]: 2025-11-26 23:09:17.696 189391 DEBUG nova.compute.resource_tracker [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 18:09:17 np0005537197 nova_compute[189387]: 2025-11-26 23:09:17.924 189391 WARNING nova.virt.libvirt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 18:09:17 np0005537197 nova_compute[189387]: 2025-11-26 23:09:17.925 189391 DEBUG nova.compute.resource_tracker [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6052MB free_disk=72.60873031616211GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
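
The pci_devices field above is plain JSON, so it is easy to slice when debugging; grouping by vendor, for instance, separates the virtio functions (vendor_id 1af4) from the Intel chipset functions (8086). pci_devices_json below is a placeholder for the logged list:

    import json
    from collections import defaultdict

    devs = json.loads(pci_devices_json)
    by_vendor = defaultdict(list)
    for d in devs:
        by_vendor[d['vendor_id']].append(d['address'])
    print(dict(by_vendor))  # {'1af4': [...], '8086': [...]}
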
Nov 26 18:09:17 np0005537197 nova_compute[189387]: 2025-11-26 23:09:17.925 189391 DEBUG oslo_concurrency.lockutils [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 18:09:17 np0005537197 nova_compute[189387]: 2025-11-26 23:09:17.926 189391 DEBUG oslo_concurrency.lockutils [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 18:09:18 np0005537197 nova_compute[189387]: 2025-11-26 23:09:18.135 189391 ERROR nova.compute.resource_tracker [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Skipping removal of allocations for deleted instances: Failed to retrieve allocations for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider 'de65df0c-bd6c-4ecc-b0a9-30ae4314ce78' not found: No resource provider with uuid de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 found  ", "request_id": "req-3b40ce43-99d5-4663-b675-013ad4bb29fb"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider 'de65df0c-bd6c-4ecc-b0a9-30ae4314ce78' not found: No resource provider with uuid de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 found  ", "request_id": "req-3b40ce43-99d5-4663-b675-013ad4bb29fb"}]}#033[00m
Nov 26 18:09:18 np0005537197 nova_compute[189387]: 2025-11-26 23:09:18.136 189391 DEBUG nova.compute.resource_tracker [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 18:09:18 np0005537197 nova_compute[189387]: 2025-11-26 23:09:18.136 189391 DEBUG nova.compute.resource_tracker [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 18:09:18 np0005537197 nova_compute[189387]: 2025-11-26 23:09:18.542 189391 INFO nova.scheduler.client.report [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [req-79967c8a-2346-4c6b-93e2-59a39ddb2a31] Created resource provider record via placement API for resource provider with UUID de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 and name compute-0.ctlplane.example.com.#033[00m
Nov 26 18:09:18 np0005537197 nova_compute[189387]: 2025-11-26 23:09:18.586 189391 DEBUG nova.virt.libvirt.host [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Nov 26 18:09:18 np0005537197 nova_compute[189387]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Nov 26 18:09:18 np0005537197 nova_compute[189387]: 2025-11-26 23:09:18.587 189391 INFO nova.virt.libvirt.host [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] kernel doesn't support AMD SEV#033[00m
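
The SEV probe is just a read of a kernel module parameter; the file held "N" here, hence the INFO line. A minimal sketch, treating the kernel's usual truthy spellings ('Y' for bool params, '1' for int params) as "supported":

    with open('/sys/module/kvm_amd/parameters/sev') as f:
        sev_supported = f.read().strip() in ('Y', '1')
    print(sev_supported)  # False on this host
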
Nov 26 18:09:18 np0005537197 nova_compute[189387]: 2025-11-26 23:09:18.588 189391 DEBUG nova.compute.provider_tree [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Updating inventory in ProviderTree for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 26 18:09:18 np0005537197 nova_compute[189387]: 2025-11-26 23:09:18.588 189391 DEBUG nova.virt.libvirt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 18:09:18 np0005537197 nova_compute[189387]: 2025-11-26 23:09:18.635 189391 DEBUG nova.scheduler.client.report [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Updated inventory for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Nov 26 18:09:18 np0005537197 nova_compute[189387]: 2025-11-26 23:09:18.635 189391 DEBUG nova.compute.provider_tree [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Updating resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 26 18:09:18 np0005537197 nova_compute[189387]: 2025-11-26 23:09:18.636 189391 DEBUG nova.compute.provider_tree [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Updating inventory in ProviderTree for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
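
Placement turns each inventory entry into usable capacity as (total - reserved) * allocation_ratio. Worked through for the inventory above:

    inv = {
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'DISK_GB': {'total': 79, 'reserved': 0, 'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB ~71.1
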
Nov 26 18:09:18 np0005537197 nova_compute[189387]: 2025-11-26 23:09:18.767 189391 DEBUG nova.compute.provider_tree [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Updating resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 26 18:09:18 np0005537197 nova_compute[189387]: 2025-11-26 23:09:18.816 189391 DEBUG nova.compute.resource_tracker [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 18:09:18 np0005537197 nova_compute[189387]: 2025-11-26 23:09:18.817 189391 DEBUG oslo_concurrency.lockutils [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 18:09:18 np0005537197 nova_compute[189387]: 2025-11-26 23:09:18.817 189391 DEBUG nova.service [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Nov 26 18:09:18 np0005537197 nova_compute[189387]: 2025-11-26 23:09:18.930 189391 DEBUG nova.service [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Nov 26 18:09:18 np0005537197 nova_compute[189387]: 2025-11-26 23:09:18.930 189391 DEBUG nova.servicegroup.drivers.db [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Nov 26 18:09:18 np0005537197 nova_compute[189387]: 2025-11-26 23:09:18.931 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 18:09:18 np0005537197 nova_compute[189387]: 2025-11-26 23:09:18.950 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
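
The "Running periodic task" lines come from oslo.service's periodic task machinery. A minimal sketch of the same pattern (the method name mirrors the log; the class is illustrative, not Nova's):

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task
        def _sync_power_states(self, context):
            pass  # invoked via run_periodic_tasks, as logged above
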
Nov 26 18:09:21 np0005537197 systemd-logind[819]: New session 26 of user zuul.
Nov 26 18:09:21 np0005537197 systemd[1]: Started Session 26 of User zuul.
Nov 26 18:09:23 np0005537197 python3.9[189859]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 18:09:24 np0005537197 python3.9[190015]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 18:09:24 np0005537197 systemd[1]: Reloading.
Nov 26 18:09:24 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:09:24 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:09:26 np0005537197 python3.9[190200]: ansible-ansible.builtin.service_facts Invoked
Nov 26 18:09:26 np0005537197 network[190217]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 18:09:26 np0005537197 network[190218]: 'network-scripts' will be removed from distribution in near future.
Nov 26 18:09:26 np0005537197 network[190219]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 18:09:27 np0005537197 podman[190225]: 2025-11-26 23:09:27.093136366 +0000 UTC m=+0.119214524 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
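
Each health_status line above is podman running the container's configured test command (/openstack/healthcheck, bind-mounted from /var/lib/openstack/healthchecks/multipathd) on its healthcheck timer. The same check can be triggered by hand; a small sketch using the standard podman CLI:

    import subprocess

    r = subprocess.run(['podman', 'healthcheck', 'run', 'multipathd'])
    print('healthy' if r.returncode == 0 else 'unhealthy')
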
Nov 26 18:09:33 np0005537197 python3.9[190513]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:09:34 np0005537197 python3.9[190666]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:09:34 np0005537197 rsyslogd[1005]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 18:09:35 np0005537197 python3.9[190820]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:09:36 np0005537197 python3.9[190972]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
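
The logger escapes control characters octally, so the "#012" runs in the _raw_params above are embedded newlines (just as "#033" elsewhere is the ANSI escape byte). Substituting them back recovers the shell snippet that was executed:

    raw = ("if systemctl is-active certmonger.service; then#012"
           "  systemctl disable --now certmonger.service#012"
           "  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012"
           "fi#012")
    print(raw.replace('#012', '\n'))
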
Nov 26 18:09:38 np0005537197 python3.9[191124]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 18:09:39 np0005537197 python3.9[191276]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 18:09:39 np0005537197 systemd[1]: Reloading.
Nov 26 18:09:39 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:09:39 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:09:40 np0005537197 python3.9[191463]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:09:41 np0005537197 python3.9[191616]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:09:42 np0005537197 python3.9[191766]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:09:42 np0005537197 podman[191793]: 2025-11-26 23:09:42.90118389 +0000 UTC m=+0.187904435 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 26 18:09:43 np0005537197 python3.9[191943]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:09:44 np0005537197 python3.9[192064]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764198582.7770765-133-47868773451000/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:09:45 np0005537197 python3.9[192216]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Nov 26 18:09:45 np0005537197 podman[192241]: 2025-11-26 23:09:45.769502216 +0000 UTC m=+0.071159009 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 26 18:09:46 np0005537197 python3.9[192389]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Nov 26 18:09:47 np0005537197 python3.9[192543]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 26 18:09:48 np0005537197 python3.9[192701]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 26 18:09:50 np0005537197 python3.9[192859]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:09:50 np0005537197 python3.9[192980]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764198589.5953412-201-81097043410351/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:09:51 np0005537197 python3.9[193130]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:09:52 np0005537197 python3.9[193251]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764198591.1234808-201-231170986444512/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:09:53 np0005537197 python3.9[193401]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:09:54 np0005537197 python3.9[193522]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764198592.6535923-201-279581604723757/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:09:54 np0005537197 python3.9[193672]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:09:55 np0005537197 python3.9[193824]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:09:56 np0005537197 python3.9[193976]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:09:57 np0005537197 python3.9[194097]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764198595.9500098-260-39544686314814/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
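
Note the mode=420 in this and the following copy tasks: an unquoted YAML mode is parsed as a decimal integer, and 420 decimal is 0o644 octal, so these files still land as rw-r--r--:

    print(oct(420))  # 0o644
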
Nov 26 18:09:57 np0005537197 podman[194098]: 2025-11-26 23:09:57.450693014 +0000 UTC m=+0.117235820 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 18:09:58 np0005537197 python3.9[194267]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:09:58 np0005537197 python3.9[194343]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:09:59 np0005537197 python3.9[194493]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:10:00 np0005537197 python3.9[194614]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764198598.9211228-260-271195402888166/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=4096a0f5410f47dcaf8ab19e56a9d8e211effecd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:10:01 np0005537197 python3.9[194764]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:10:01 np0005537197 python3.9[194885]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764198600.4671004-260-176180714976985/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:10:02 np0005537197 python3.9[195035]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:10:03 np0005537197 python3.9[195156]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764198601.950963-260-165294923022573/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:10:04 np0005537197 python3.9[195306]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:10:04 np0005537197 python3.9[195427]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764198603.3170962-260-207734753175142/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=6e4982940d2bfae88404914dfaf72552f6356d81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:10:05 np0005537197 python3.9[195577]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:10:05 np0005537197 python3.9[195698]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764198604.8424578-260-125860478210539/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:10:06 np0005537197 python3.9[195848]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:10:07 np0005537197 python3.9[195969]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764198606.0983274-260-245052189208959/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=d474f1e4c3dbd24762592c51cbe5311f0a037273 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:10:08 np0005537197 python3.9[196119]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:10:08 np0005537197 python3.9[196240]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764198607.562023-260-208465710820732/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=2b6bd0891e609bf38a73282f42888052b750bed6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:10:09 np0005537197 python3.9[196390]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:10:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:10:09.611 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 18:10:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:10:09.611 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 18:10:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:10:09.611 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 18:10:10 np0005537197 python3.9[196511]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764198608.8428905-260-259025378019792/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=e342121a88f67e2bae7ebc05d1e6d350470198a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:10:10 np0005537197 python3.9[196661]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:10:11 np0005537197 python3.9[196782]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764198610.2355924-260-117771263380874/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:10:12 np0005537197 python3.9[196932]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:10:12 np0005537197 python3.9[197008]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:10:13 np0005537197 podman[197132]: 2025-11-26 23:10:13.741829376 +0000 UTC m=+0.153020964 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 26 18:10:13 np0005537197 python3.9[197171]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:10:14 np0005537197 python3.9[197258]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:10:15 np0005537197 python3.9[197408]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:10:15 np0005537197 python3.9[197484]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:10:15 np0005537197 podman[197485]: 2025-11-26 23:10:15.985410292 +0000 UTC m=+0.077866309 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 26 18:10:16 np0005537197 nova_compute[189387]: 2025-11-26 23:10:16.126 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:10:16 np0005537197 nova_compute[189387]: 2025-11-26 23:10:16.127 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:10:16 np0005537197 nova_compute[189387]: 2025-11-26 23:10:16.127 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 18:10:16 np0005537197 nova_compute[189387]: 2025-11-26 23:10:16.127 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 18:10:16 np0005537197 nova_compute[189387]: 2025-11-26 23:10:16.144 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 18:10:16 np0005537197 nova_compute[189387]: 2025-11-26 23:10:16.144 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:10:16 np0005537197 nova_compute[189387]: 2025-11-26 23:10:16.144 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:10:16 np0005537197 nova_compute[189387]: 2025-11-26 23:10:16.144 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:10:16 np0005537197 nova_compute[189387]: 2025-11-26 23:10:16.145 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:10:16 np0005537197 nova_compute[189387]: 2025-11-26 23:10:16.145 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:10:16 np0005537197 nova_compute[189387]: 2025-11-26 23:10:16.145 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:10:16 np0005537197 nova_compute[189387]: 2025-11-26 23:10:16.146 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 18:10:16 np0005537197 nova_compute[189387]: 2025-11-26 23:10:16.146 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:10:16 np0005537197 nova_compute[189387]: 2025-11-26 23:10:16.171 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 18:10:16 np0005537197 nova_compute[189387]: 2025-11-26 23:10:16.171 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 18:10:16 np0005537197 nova_compute[189387]: 2025-11-26 23:10:16.171 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 18:10:16 np0005537197 nova_compute[189387]: 2025-11-26 23:10:16.172 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 18:10:16 np0005537197 nova_compute[189387]: 2025-11-26 23:10:16.376 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 18:10:16 np0005537197 nova_compute[189387]: 2025-11-26 23:10:16.378 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6044MB free_disk=72.60869216918945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 18:10:16 np0005537197 nova_compute[189387]: 2025-11-26 23:10:16.379 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 18:10:16 np0005537197 nova_compute[189387]: 2025-11-26 23:10:16.379 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 18:10:16 np0005537197 nova_compute[189387]: 2025-11-26 23:10:16.440 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 18:10:16 np0005537197 nova_compute[189387]: 2025-11-26 23:10:16.440 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 18:10:16 np0005537197 nova_compute[189387]: 2025-11-26 23:10:16.462 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 18:10:16 np0005537197 nova_compute[189387]: 2025-11-26 23:10:16.483 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 18:10:16 np0005537197 nova_compute[189387]: 2025-11-26 23:10:16.485 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 18:10:16 np0005537197 nova_compute[189387]: 2025-11-26 23:10:16.485 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 18:10:16 np0005537197 python3.9[197655]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:10:17 np0005537197 python3.9[197807]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:10:18 np0005537197 python3.9[197959]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:10:19 np0005537197 python3.9[198111]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:10:19 np0005537197 systemd[1]: Reloading.
Nov 26 18:10:19 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:10:19 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:10:19 np0005537197 systemd[1]: Listening on Podman API Socket.
Nov 26 18:10:21 np0005537197 python3.9[198303]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:10:21 np0005537197 python3.9[198426]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764198620.3609047-482-33337981707680/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:10:22 np0005537197 python3.9[198502]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:10:23 np0005537197 python3.9[198625]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764198620.3609047-482-33337981707680/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:10:24 np0005537197 python3.9[198777]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Nov 26 18:10:25 np0005537197 python3.9[198929]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 18:10:26 np0005537197 python3[199081]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 18:10:26 np0005537197 podman[199121]: 2025-11-26 23:10:26.93729341 +0000 UTC m=+0.075220238 container create bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125)
Nov 26 18:10:26 np0005537197 podman[199121]: 2025-11-26 23:10:26.906570066 +0000 UTC m=+0.044496894 image pull 64a16ed7692810b1a8f0a7e67b7d8c7ca1d63d1a94542312fec7e65db8b42eda quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Nov 26 18:10:26 np0005537197 python3[199081]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
Nov 26 18:10:27 np0005537197 podman[199283]: 2025-11-26 23:10:27.754224634 +0000 UTC m=+0.093164829 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 18:10:27 np0005537197 python3.9[199331]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:10:28 np0005537197 python3.9[199486]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:10:29 np0005537197 python3.9[199637]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764198629.0283284-546-138204922330118/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:10:30 np0005537197 python3.9[199713]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 18:10:30 np0005537197 systemd[1]: Reloading.
Nov 26 18:10:31 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:10:31 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:10:32 np0005537197 python3.9[199824]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:10:33 np0005537197 systemd[1]: Reloading.
Nov 26 18:10:33 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:10:33 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:10:33 np0005537197 systemd[1]: Starting ceilometer_agent_compute container...
Nov 26 18:10:33 np0005537197 systemd[1]: Started libcrun container.
Nov 26 18:10:33 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22d84232c0f7d024279348a02ab2fce57b857bc242380f78889be89afacccd9/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 18:10:33 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22d84232c0f7d024279348a02ab2fce57b857bc242380f78889be89afacccd9/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Nov 26 18:10:33 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22d84232c0f7d024279348a02ab2fce57b857bc242380f78889be89afacccd9/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 26 18:10:33 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22d84232c0f7d024279348a02ab2fce57b857bc242380f78889be89afacccd9/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Nov 26 18:10:33 np0005537197 systemd[1]: Started /usr/bin/podman healthcheck run bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517.
Nov 26 18:10:33 np0005537197 podman[199864]: 2025-11-26 23:10:33.618824099 +0000 UTC m=+0.160280219 container init bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 26 18:10:33 np0005537197 ceilometer_agent_compute[199879]: + sudo -E kolla_set_configs
Nov 26 18:10:33 np0005537197 podman[199864]: 2025-11-26 23:10:33.650760826 +0000 UTC m=+0.192216926 container start bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 26 18:10:33 np0005537197 ceilometer_agent_compute[199879]: sudo: unable to send audit message: Operation not permitted
Nov 26 18:10:33 np0005537197 podman[199864]: ceilometer_agent_compute
Nov 26 18:10:33 np0005537197 systemd[1]: Started ceilometer_agent_compute container.
Nov 26 18:10:33 np0005537197 ceilometer_agent_compute[199879]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 18:10:33 np0005537197 ceilometer_agent_compute[199879]: INFO:__main__:Validating config file
Nov 26 18:10:33 np0005537197 ceilometer_agent_compute[199879]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 18:10:33 np0005537197 ceilometer_agent_compute[199879]: INFO:__main__:Copying service configuration files
Nov 26 18:10:33 np0005537197 ceilometer_agent_compute[199879]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 26 18:10:33 np0005537197 ceilometer_agent_compute[199879]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 26 18:10:33 np0005537197 ceilometer_agent_compute[199879]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 26 18:10:33 np0005537197 ceilometer_agent_compute[199879]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 26 18:10:33 np0005537197 ceilometer_agent_compute[199879]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 26 18:10:33 np0005537197 ceilometer_agent_compute[199879]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 26 18:10:33 np0005537197 ceilometer_agent_compute[199879]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 26 18:10:33 np0005537197 ceilometer_agent_compute[199879]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 26 18:10:33 np0005537197 ceilometer_agent_compute[199879]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 26 18:10:33 np0005537197 ceilometer_agent_compute[199879]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 26 18:10:33 np0005537197 ceilometer_agent_compute[199879]: INFO:__main__:Writing out command to execute
Nov 26 18:10:33 np0005537197 ceilometer_agent_compute[199879]: ++ cat /run_command
Nov 26 18:10:33 np0005537197 ceilometer_agent_compute[199879]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 26 18:10:33 np0005537197 ceilometer_agent_compute[199879]: + ARGS=
Nov 26 18:10:33 np0005537197 ceilometer_agent_compute[199879]: + sudo kolla_copy_cacerts
Nov 26 18:10:33 np0005537197 ceilometer_agent_compute[199879]: sudo: unable to send audit message: Operation not permitted
Nov 26 18:10:33 np0005537197 ceilometer_agent_compute[199879]: + [[ ! -n '' ]]
Nov 26 18:10:33 np0005537197 ceilometer_agent_compute[199879]: + . kolla_extend_start
Nov 26 18:10:33 np0005537197 ceilometer_agent_compute[199879]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Nov 26 18:10:33 np0005537197 ceilometer_agent_compute[199879]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 26 18:10:33 np0005537197 ceilometer_agent_compute[199879]: + umask 0022
Nov 26 18:10:33 np0005537197 ceilometer_agent_compute[199879]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Nov 26 18:10:33 np0005537197 podman[199885]: 2025-11-26 23:10:33.786536775 +0000 UTC m=+0.118222630 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Nov 26 18:10:33 np0005537197 systemd[1]: bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517-3fa952597db22dd0.service: Main process exited, code=exited, status=1/FAILURE
Nov 26 18:10:33 np0005537197 systemd[1]: bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517-3fa952597db22dd0.service: Failed with result 'exit-code'.
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.615 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.615 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.615 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.615 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.615 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.615 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.615 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.616 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.616 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.616 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.616 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.616 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.616 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.616 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.616 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.616 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.616 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.616 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.616 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.616 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.617 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.617 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.617 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.617 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.617 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.617 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.617 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.617 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.617 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.617 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.617 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.617 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.618 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.618 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.618 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.618 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.618 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.618 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.618 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.618 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.618 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.618 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.618 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.618 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.618 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.618 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.618 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.619 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.619 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.619 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.619 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.619 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.619 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.619 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.619 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.619 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.619 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.619 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.619 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.619 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.619 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.619 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.620 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.620 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.620 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.620 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.620 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.620 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.620 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.620 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.620 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.620 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.620 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.620 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.620 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.620 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.620 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.621 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.621 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.621 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.621 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.621 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.621 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.621 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.621 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.621 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.621 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.621 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.621 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.621 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.622 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.622 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.622 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.622 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.622 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.622 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.622 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.622 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.622 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.622 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.622 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.622 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.622 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.622 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.623 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.623 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.623 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.623 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.623 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.623 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.623 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.623 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.623 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.623 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.623 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.623 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.623 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.623 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.623 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.624 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.624 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.624 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.624 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.624 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.624 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.624 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.624 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.624 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.624 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.624 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.624 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.624 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.624 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.624 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.625 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.625 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.625 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.625 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.625 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.625 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.625 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.625 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.625 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.625 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.625 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.625 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.625 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.625 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.625 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.626 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.626 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.626 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.626 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.649 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.650 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.650 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.650 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.650 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.650 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.650 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.650 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.651 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.651 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.651 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.651 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.651 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.651 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.651 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.651 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.651 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.651 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.651 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.651 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.651 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.651 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.652 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.652 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.652 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.652 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.652 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.652 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.652 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.652 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.652 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.652 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.652 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.652 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.652 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.652 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.652 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.652 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.652 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.653 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.653 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.653 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.653 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.653 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.653 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.653 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.653 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.653 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.653 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.653 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.653 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.653 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.654 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.654 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.654 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.654 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.654 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.654 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.654 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.654 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.654 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.654 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.654 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.654 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.654 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.654 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.654 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.654 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.654 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.655 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.655 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.655 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.655 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.655 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.655 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.655 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.655 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.655 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.655 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.655 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.655 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.655 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.655 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.656 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.656 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.656 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.656 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.656 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.656 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.656 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.656 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.656 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.656 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.656 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.656 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.656 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.656 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.656 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.656 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.656 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.656 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.657 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.657 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.657 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.657 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.657 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.657 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.657 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.657 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.657 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.657 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.657 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.657 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.657 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.657 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.657 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.657 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.658 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.658 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.658 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.658 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.658 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.658 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.658 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.658 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.658 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.658 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.658 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.658 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.658 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.658 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.658 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.658 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.658 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.659 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.659 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.659 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.659 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.659 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.659 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.659 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.659 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.659 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.659 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.659 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.659 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.659 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.659 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.659 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.659 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.659 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.659 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.660 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.660 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.662 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.666 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.666 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Nov 26 18:10:34 np0005537197 python3.9[200065]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 18:10:34 np0005537197 systemd[1]: Stopping ceilometer_agent_compute container...
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.789 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.884 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.892 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:319
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.892 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:323
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.892 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentHeartBeatManager(0) [12]
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.893 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.894 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 26 18:10:34 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:34.894 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.008 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.009 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.009 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.009 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.009 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.009 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.009 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.009 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.010 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.010 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.010 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.010 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.010 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.010 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.010 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.010 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.011 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.011 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.011 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.011 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.011 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.011 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.011 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.012 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.012 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.012 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.012 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.012 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.012 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.012 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.012 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.012 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.013 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.013 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.013 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.013 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.013 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.013 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.013 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.013 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.013 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.014 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.014 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.014 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.014 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.014 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.014 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.014 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.014 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.015 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.015 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.015 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.015 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.015 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.015 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.015 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.015 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.015 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.016 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.016 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.016 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.016 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.016 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.016 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.016 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.016 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.016 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.017 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.017 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.017 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.017 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.017 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.017 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.017 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.017 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.017 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.018 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.018 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.018 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.018 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.018 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.018 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.018 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.018 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.019 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.019 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.019 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.019 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.019 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.019 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.019 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.019 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.019 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.020 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.020 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.020 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.020 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.020 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.020 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.020 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.020 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.020 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.020 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.021 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.021 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.021 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.021 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.021 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.021 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.021 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.021 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.021 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.022 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.022 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.022 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.022 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.022 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.022 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.022 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.022 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.022 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.022 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.023 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.023 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.023 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.023 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.023 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.023 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.023 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.023 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.023 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.023 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.023 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.023 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.024 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.024 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.024 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.024 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.024 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.024 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.024 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.024 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.024 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.024 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.024 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.025 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.025 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.025 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.025 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.025 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.025 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.025 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.025 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.025 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.026 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.026 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.026 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.026 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.026 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.026 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.026 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.026 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.027 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.027 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.027 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.027 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.027 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.027 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.027 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.028 14 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [14]
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[199879]: 2025-11-26 23:10:35.040 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:335
Nov 26 18:10:35 np0005537197 virtqemud[188953]: End of file while reading data: Input/output error
Nov 26 18:10:35 np0005537197 systemd[1]: libpod-bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517.scope: Deactivated successfully.
Nov 26 18:10:35 np0005537197 systemd[1]: libpod-bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517.scope: Consumed 1.627s CPU time.
Nov 26 18:10:35 np0005537197 podman[200077]: 2025-11-26 23:10:35.220284808 +0000 UTC m=+0.472641303 container died bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, managed_by=edpm_ansible)
Nov 26 18:10:35 np0005537197 systemd[1]: bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517-3fa952597db22dd0.timer: Deactivated successfully.
Nov 26 18:10:35 np0005537197 systemd[1]: Stopped /usr/bin/podman healthcheck run bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517.
Nov 26 18:10:35 np0005537197 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517-userdata-shm.mount: Deactivated successfully.
Nov 26 18:10:35 np0005537197 systemd[1]: var-lib-containers-storage-overlay-f22d84232c0f7d024279348a02ab2fce57b857bc242380f78889be89afacccd9-merged.mount: Deactivated successfully.
Nov 26 18:10:35 np0005537197 podman[200077]: 2025-11-26 23:10:35.28934864 +0000 UTC m=+0.541705135 container cleanup bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, config_id=edpm, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 26 18:10:35 np0005537197 podman[200077]: ceilometer_agent_compute
Nov 26 18:10:35 np0005537197 podman[200110]: ceilometer_agent_compute
Nov 26 18:10:35 np0005537197 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Nov 26 18:10:35 np0005537197 systemd[1]: Stopped ceilometer_agent_compute container.
Nov 26 18:10:35 np0005537197 systemd[1]: Starting ceilometer_agent_compute container...
Nov 26 18:10:35 np0005537197 systemd[1]: Started libcrun container.
Nov 26 18:10:35 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22d84232c0f7d024279348a02ab2fce57b857bc242380f78889be89afacccd9/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 18:10:35 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22d84232c0f7d024279348a02ab2fce57b857bc242380f78889be89afacccd9/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Nov 26 18:10:35 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22d84232c0f7d024279348a02ab2fce57b857bc242380f78889be89afacccd9/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 26 18:10:35 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f22d84232c0f7d024279348a02ab2fce57b857bc242380f78889be89afacccd9/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Nov 26 18:10:35 np0005537197 systemd[1]: Started /usr/bin/podman healthcheck run bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517.
Nov 26 18:10:35 np0005537197 podman[200123]: 2025-11-26 23:10:35.608759964 +0000 UTC m=+0.170726798 container init bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true)
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: + sudo -E kolla_set_configs
Nov 26 18:10:35 np0005537197 podman[200123]: 2025-11-26 23:10:35.648771458 +0000 UTC m=+0.210738242 container start bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute)
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: sudo: unable to send audit message: Operation not permitted
Nov 26 18:10:35 np0005537197 podman[200123]: ceilometer_agent_compute
Nov 26 18:10:35 np0005537197 systemd[1]: Started ceilometer_agent_compute container.
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: INFO:__main__:Validating config file
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: INFO:__main__:Copying service configuration files
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: INFO:__main__:Writing out command to execute
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: ++ cat /run_command
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
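[editor's note] The Deleting/Copying/Setting-permission triplets and the /run_command contents above are both driven by the mounted kolla config. A minimal sketch of ceilometer-agent-compute.json that would produce exactly those operations — field names follow the kolla_set_configs schema; the owner/perm values are assumptions, only the paths and the command string come from the log:

    # Sketch of the kolla config.json mounted at
    # /var/lib/kolla/config_files/config.json (owner/perm assumed)
    cat > ceilometer-agent-compute.json <<'EOF'
    {
        "command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout",
        "config_files": [
            {"source": "/var/lib/openstack/config/ceilometer.conf",
             "dest": "/etc/ceilometer/ceilometer.conf",
             "owner": "ceilometer", "perm": "0600"},
            {"source": "/var/lib/openstack/config/polling.yaml",
             "dest": "/etc/ceilometer/polling.yaml",
             "owner": "ceilometer", "perm": "0600"},
            {"source": "/var/lib/openstack/config/custom.conf",
             "dest": "/etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf",
             "owner": "ceilometer", "perm": "0600"},
            {"source": "/var/lib/openstack/config/ceilometer-host-specific.conf",
             "dest": "/etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf",
             "owner": "ceilometer", "perm": "0600"}
        ]
    }
    EOF

With KOLLA_CONFIG_STRATEGY=COPY_ALWAYS, kolla_set_configs deletes each destination and re-copies it on every container start, which is exactly the sequence logged above.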
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: + ARGS=
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: + sudo kolla_copy_cacerts
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: sudo: unable to send audit message: Operation not permitted
Nov 26 18:10:35 np0005537197 podman[200146]: 2025-11-26 23:10:35.772772502 +0000 UTC m=+0.105762506 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4)
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: + [[ ! -n '' ]]
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: + . kolla_extend_start
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: + umask 0022
Nov 26 18:10:35 np0005537197 ceilometer_agent_compute[200139]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Nov 26 18:10:35 np0005537197 systemd[1]: bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517-1157307bff510f04.service: Main process exited, code=exited, status=1/FAILURE
Nov 26 18:10:35 np0005537197 systemd[1]: bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517-1157307bff510f04.service: Failed with result 'exit-code'.
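[editor's note] The failing unit here is podman's transient per-run healthcheck unit (container ID plus a random suffix), not the container itself: the check fired while the agent was still initializing, matching health_status=starting and health_failing_streak=1 in the podman event just above. To re-check by hand once startup settles (a sketch using podman's standard healthcheck subcommands):

    # Run the configured check once and show its exit code
    podman healthcheck run ceilometer_agent_compute; echo "exit=$?"
    # Dump the recorded health state and failure log
    podman inspect --format '{{json .State.Healthcheck}}' ceilometer_agent_compute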
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.544 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.545 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.545 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.545 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.545 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.545 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.545 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.546 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.546 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.546 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.546 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.546 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.546 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.546 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.546 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.547 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.547 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.547 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.547 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.547 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.547 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.547 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
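[editor's note] This warning means one of the files feeding /etc/ceilometer/ceilometer.conf.d still sets tenant_name_discovery; the fix is renaming it to identity_name_discovery in the host-side source the agent copies from. A sketch — which drop-in actually carries the option isn't visible in the log, and crudini being installed on the host is an assumption:

    # Rename the deprecated option in the host-side source file (assumed location)
    crudini --set /var/lib/openstack/config/telemetry/custom.conf DEFAULT identity_name_discovery False
    crudini --del /var/lib/openstack/config/telemetry/custom.conf DEFAULT tenant_name_discovery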
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.548 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.548 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.548 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.548 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.548 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.548 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.548 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.549 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.549 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.549 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.549 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.549 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.549 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.549 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.549 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.549 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.550 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.550 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.550 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.550 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.550 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.550 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.550 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.550 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.550 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.551 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.551 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.551 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.551 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.551 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.551 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.551 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.551 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.551 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.552 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.552 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.552 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.552 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.552 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.552 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.552 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.552 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.553 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.553 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.553 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.553 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.553 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.553 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.553 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.553 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.554 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.554 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.554 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
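[editor's note] compute.instance_discovery_method = libvirt_metadata means the poller enumerates instances from the Nova metadata embedded in each libvirt domain instead of querying the Nova API. One way to eyeball that metadata — a sketch: the domain name and the 1.1 namespace version are assumptions, and it relies on a libvirt client being present in the container, which has /run/libvirt mounted:

    # Show Nova's per-instance metadata for one domain (name assumed)
    podman exec ceilometer_agent_compute \
        virsh --connect qemu:///system metadata instance-00000001 \
        http://openstack.org/xmlns/libvirt/nova/1.1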
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.554 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.554 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.554 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.554 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.555 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.555 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.555 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.555 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.555 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.555 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.555 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.556 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.556 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.556 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.556 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.556 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.556 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.556 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.556 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.557 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.557 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.557 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.557 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.557 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.557 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.557 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.558 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
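[editor's note] Note the split between the two exporter configs in this dump: the DEFAULT-level prometheus_* options logged earlier are disabled and still point at 127.0.0.1:9101, while the [polling] group enables the exporter with TLS on [::]:9101 using the certs mounted at /etc/ceilometer/tls. An oslo.config drop-in reproducing the [polling] values would look like this (a sketch assembled from the logged values; the target file is one of the drop-ins copied earlier):

    # [polling] exporter settings matching the values in this dump
    cat >> /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf <<'EOF'
    [polling]
    enable_prometheus_exporter = true
    prometheus_listen_addresses = [::]:9101
    prometheus_tls_enable = true
    prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt
    prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key
    EOF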
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.558 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.558 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.558 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.558 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.558 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.558 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.559 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.559 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.559 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.559 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.559 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.559 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.559 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.560 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.560 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.560 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.560 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.560 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.560 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.560 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.561 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.561 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.561 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.561 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.561 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.561 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.561 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.562 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.562 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.562 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.562 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.562 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.562 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.562 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.563 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.563 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.563 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.563 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.563 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.563 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.563 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.564 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.564 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.564 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.564 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.564 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.564 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.564 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.564 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.565 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.565 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 26 18:10:36 np0005537197 python3.9[200322]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.589 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
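[editor's note] Once the heartbeat child is listening, the unix socket named above can be verified from inside the container — a sketch, assuming iproute2 is present in the image (ss flags: -x unix sockets, -l listening):

    # Confirm the heartbeat socket reported in the log line above
    podman exec ceilometer_agent_compute ss -xl | grep ceilometer-compute.socket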
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.590 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.590 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.590 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.590 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.590 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.591 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.591 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.591 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.591 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.591 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.591 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.591 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.591 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.591 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.591 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.591 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.592 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.592 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.592 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.592 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.592 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.592 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.592 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.592 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.592 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.592 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.592 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.592 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.592 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.593 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.593 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.593 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.593 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.593 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.593 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.593 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.593 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.593 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.593 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.593 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.593 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.594 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.594 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.594 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.594 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.594 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.594 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.594 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.594 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.594 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.594 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.594 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.595 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.595 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.595 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.595 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.595 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.595 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.595 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.595 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.595 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.595 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.595 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.595 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.596 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.596 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.596 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.596 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.596 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.596 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.596 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.596 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.596 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.596 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.596 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.597 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.597 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.597 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.597 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.597 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.597 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.597 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.597 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.597 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.597 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.598 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.598 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.598 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.598 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.598 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.598 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.598 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.598 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.598 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.598 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.598 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.599 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.599 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.599 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.599 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.599 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.599 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.599 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.599 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.600 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.600 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.600 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.600 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.600 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.600 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.600 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.600 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.600 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.600 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.600 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.600 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.601 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.601 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.601 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.601 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.601 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.601 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.601 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.601 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.601 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.601 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.601 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.602 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.602 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.602 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.602 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.602 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.602 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.602 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.602 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.602 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.602 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.602 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.602 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.602 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.602 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.603 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.603 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.603 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.603 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.603 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.603 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.603 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.603 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.603 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.603 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.603 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.603 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.606 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.608 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.609 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.627 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.638 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.639 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.639 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.795 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.795 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.795 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.795 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.795 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.795 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.796 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.796 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.796 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.796 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.796 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.796 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.796 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.796 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.796 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.796 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.797 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.797 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.797 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.797 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.797 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.797 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.797 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.797 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.797 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.797 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.798 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.798 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.798 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.798 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.798 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.798 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.798 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.798 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.798 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.798 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.798 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.798 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.798 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.799 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.799 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.799 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.799 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.799 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.799 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.799 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.799 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.799 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.799 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.799 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.799 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.800 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.800 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.800 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.800 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.800 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.800 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.800 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.800 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.800 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.800 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.800 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.801 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.801 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.801 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.801 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.801 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.801 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.801 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.801 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.801 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.801 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.801 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.801 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.802 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.802 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.802 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.802 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.802 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.802 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.802 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.802 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.802 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.802 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.802 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.802 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.803 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.803 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.803 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.803 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.803 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.803 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.803 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.803 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.803 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.803 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.803 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.803 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.803 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.803 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.804 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.804 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.804 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.804 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.804 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.804 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.804 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.804 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.804 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.804 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.804 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.804 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.804 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.805 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.805 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.805 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.805 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.805 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.805 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.805 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.805 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.805 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.805 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.805 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.805 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.805 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.805 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.805 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.805 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.806 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.806 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.806 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.806 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.807 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.807 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.807 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.807 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.807 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.807 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.807 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.807 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.807 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.807 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.807 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.807 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.807 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.807 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.808 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.808 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.808 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.808 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.808 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.808 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.808 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.808 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.808 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.808 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.812 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.835 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.836 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.836 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce4690710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.837 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce4690710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.837 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce4690710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce4690710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce4690710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce4690710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce4690710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce4690710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce4690710>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce4690710>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.840 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce4690710>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.841 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce4690710>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.841 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce4690710>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.842 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce4690710>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.842 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce4690710>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce4690710>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.843 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce4690710>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce4690710>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.844 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce4690710>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.844 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce4690710>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.845 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce4690710>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.845 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce4690710>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.846 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.846 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce4690710>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.846 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce4690710>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.846 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce4690710>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce4690710>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.847 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.848 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.848 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.848 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.849 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.849 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.849 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.849 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.850 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.850 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.850 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.850 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.850 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.852 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.852 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:10:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:10:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
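
The burst of "Finished processing pollster" lines above is the ceilometer compute polling agent completing one polling cycle: the manager iterates every configured pollster against the cached instance list and logs one debug line per meter as it finishes. A minimal sketch of that dispatch loop, assuming illustrative pollster objects with a get_samples() method (the real loop lives in ceilometer/polling/manager.py; all names below are simplified stand-ins, not the actual implementation):

    import logging

    LOG = logging.getLogger("ceilometer.polling.manager")

    def execute_polling_task(pollsters, resources):
        # One cycle: run every configured pollster once. A failure in one
        # pollster must not abort the remainder of the cycle.
        for pollster in pollsters:
            try:
                samples = list(pollster.get_samples(resources))
                # the real manager hands `samples` to a publisher here
                LOG.debug("Finished processing pollster [%s].", pollster.name)
            except Exception:
                LOG.exception("Pollster %s failed, continuing.", pollster.name)
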
Nov 26 18:10:37 np0005537197 python3.9[200458]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764198635.9599829-578-226540874519626/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:10:38 np0005537197 python3.9[200610]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Nov 26 18:10:39 np0005537197 python3.9[200762]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 18:10:40 np0005537197 python3[200914]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 18:10:40 np0005537197 podman[200950]: 2025-11-26 23:10:40.519290858 +0000 UTC m=+0.065452966 container create 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=edpm, container_name=node_exporter)
Nov 26 18:10:40 np0005537197 podman[200950]: 2025-11-26 23:10:40.48353609 +0000 UTC m=+0.029698208 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Nov 26 18:10:40 np0005537197 python3[200914]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
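
The PODMAN-CONTAINER-DEBUG line above is the literal `podman create` invocation that edpm_container_manage derives from node_exporter.json: each config_data key maps onto a CLI flag ('net' to --network, 'ports' to --publish, 'volumes' to repeated --volume, 'environment' to --env, the healthcheck test to --healthcheck-command), and the whole dict is also attached as a config_data label so later runs can detect drift. A rough sketch of that translation for a subset of keys (the real Ansible module handles many more options; this is illustrative only):

    def config_to_podman_args(name, cfg):
        # Translate a simplified edpm config_data dict into `podman create` args.
        args = ["podman", "create", "--name", name]
        for key, value in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={value}"]
        if cfg.get("net"):
            args += ["--network", cfg["net"]]
        for port in cfg.get("ports", []):
            args += ["--publish", port]
        for volume in cfg.get("volumes", []):
            args += ["--volume", volume]
        if cfg.get("privileged"):
            args += ["--privileged=True"]   # matches the literal form in the log
        if cfg.get("user"):
            args += ["--user", cfg["user"]]
        args.append(cfg["image"])
        args += cfg.get("command", [])      # exporter flags go after the image
        return args
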
Nov 26 18:10:41 np0005537197 python3.9[201139]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:10:42 np0005537197 python3.9[201293]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:10:43 np0005537197 python3.9[201444]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764198642.6420634-631-118882939115601/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:10:44 np0005537197 podman[201520]: 2025-11-26 23:10:44.024767859 +0000 UTC m=+0.151164206 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 26 18:10:44 np0005537197 python3.9[201521]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 18:10:44 np0005537197 systemd[1]: Reloading.
Nov 26 18:10:44 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:10:44 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:10:45 np0005537197 python3.9[201656]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:10:46 np0005537197 systemd[1]: Reloading.
Nov 26 18:10:46 np0005537197 podman[201659]: 2025-11-26 23:10:46.478412627 +0000 UTC m=+0.092944393 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 18:10:46 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:10:46 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:10:46 np0005537197 systemd[1]: Starting node_exporter container...
Nov 26 18:10:46 np0005537197 systemd[1]: Started libcrun container.
Nov 26 18:10:46 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e39057cfaea66f24aeb9319e5fb9d9ae7fbf05eddb6cfdf9a113596ce0e86c59/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 18:10:46 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e39057cfaea66f24aeb9319e5fb9d9ae7fbf05eddb6cfdf9a113596ce0e86c59/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 26 18:10:46 np0005537197 systemd[1]: Started /usr/bin/podman healthcheck run 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262.
Nov 26 18:10:46 np0005537197 podman[201717]: 2025-11-26 23:10:46.912812824 +0000 UTC m=+0.159297192 container init 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.932Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.932Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.933Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.933Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.934Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.934Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.935Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.935Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.935Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.935Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.935Z caller=node_exporter.go:117 level=info collector=arp
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.935Z caller=node_exporter.go:117 level=info collector=bcache
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.935Z caller=node_exporter.go:117 level=info collector=bonding
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.935Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.935Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.935Z caller=node_exporter.go:117 level=info collector=cpu
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.935Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.935Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.935Z caller=node_exporter.go:117 level=info collector=edac
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.935Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.935Z caller=node_exporter.go:117 level=info collector=filefd
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.935Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.935Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.935Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.935Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.935Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.935Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.935Z caller=node_exporter.go:117 level=info collector=netclass
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.935Z caller=node_exporter.go:117 level=info collector=netdev
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.935Z caller=node_exporter.go:117 level=info collector=netstat
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.935Z caller=node_exporter.go:117 level=info collector=nfs
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.935Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.935Z caller=node_exporter.go:117 level=info collector=nvme
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.936Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.936Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.936Z caller=node_exporter.go:117 level=info collector=softnet
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.936Z caller=node_exporter.go:117 level=info collector=systemd
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.936Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.936Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.936Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.936Z caller=node_exporter.go:117 level=info collector=xfs
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.936Z caller=node_exporter.go:117 level=info collector=zfs
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.937Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Nov 26 18:10:46 np0005537197 node_exporter[201732]: ts=2025-11-26T23:10:46.938Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
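
At this point the exporter startup is complete: the collector include/exclude flags have been parsed, the earlier level=error about /run/udev/data only means the diskstats collector falls back to running without udev device properties (that directory is not mounted into the container), and the process is serving HTTPS on port 9100 using the TLS material referenced from the mounted node_exporter.yaml web-config. A hedged probe of that endpoint from the host; the CA path below is an assumption based on the volume mounts shown above, and hostname checking is relaxed only because the certificate is presumably issued for the node's name rather than localhost:

    import ssl
    import urllib.request

    # Assumed path, taken from the /var/lib/openstack/certs/telemetry/default mount.
    ctx = ssl.create_default_context(
        cafile="/var/lib/openstack/certs/telemetry/default/ca.crt")
    ctx.check_hostname = False  # local diagnostic only; cert SAN is the node FQDN

    with urllib.request.urlopen("https://localhost:9100/metrics", context=ctx) as resp:
        body = resp.read().decode()
    print(body.splitlines()[:5])  # first few Prometheus metric lines
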
Nov 26 18:10:46 np0005537197 podman[201717]: 2025-11-26 23:10:46.94734904 +0000 UTC m=+0.193833378 container start 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 18:10:46 np0005537197 podman[201717]: node_exporter
Nov 26 18:10:46 np0005537197 systemd[1]: Started node_exporter container.
Nov 26 18:10:47 np0005537197 podman[201742]: 2025-11-26 23:10:47.063904695 +0000 UTC m=+0.095575383 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
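
The health_status entries come from a transient per-container systemd timer that podman registers (the <container-id>-....timer deactivated a few lines below at stop time); on each tick it runs `podman healthcheck run <id>`, which executes the configured test command (/openstack/healthcheck node_exporter) inside the container and tracks the healthy/failing streak. The same check can be driven by hand; a small wrapper, assuming podman is on PATH and the caller has access to the container:

    import subprocess

    def container_healthy(name: str) -> bool:
        # `podman healthcheck run` exits 0 when the container's configured
        # healthcheck command succeeds, non-zero otherwise.
        proc = subprocess.run(["podman", "healthcheck", "run", name],
                              capture_output=True, text=True)
        return proc.returncode == 0

    print(container_healthy("node_exporter"))
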
Nov 26 18:10:47 np0005537197 auditd[702]: Audit daemon rotating log files
Nov 26 18:10:47 np0005537197 python3.9[201918]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 18:10:48 np0005537197 systemd[1]: Stopping node_exporter container...
Nov 26 18:10:48 np0005537197 systemd[1]: libpod-413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262.scope: Deactivated successfully.
Nov 26 18:10:48 np0005537197 podman[201922]: 2025-11-26 23:10:48.139393271 +0000 UTC m=+0.071594180 container died 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 18:10:48 np0005537197 systemd[1]: 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262-2ba72d5fc05f4844.timer: Deactivated successfully.
Nov 26 18:10:48 np0005537197 systemd[1]: Stopped /usr/bin/podman healthcheck run 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262.
Nov 26 18:10:48 np0005537197 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262-userdata-shm.mount: Deactivated successfully.
Nov 26 18:10:48 np0005537197 systemd[1]: var-lib-containers-storage-overlay-e39057cfaea66f24aeb9319e5fb9d9ae7fbf05eddb6cfdf9a113596ce0e86c59-merged.mount: Deactivated successfully.
Nov 26 18:10:48 np0005537197 podman[201922]: 2025-11-26 23:10:48.189277979 +0000 UTC m=+0.121478878 container cleanup 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 18:10:48 np0005537197 podman[201922]: node_exporter
Nov 26 18:10:48 np0005537197 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Nov 26 18:10:48 np0005537197 podman[201952]: node_exporter
Nov 26 18:10:48 np0005537197 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Nov 26 18:10:48 np0005537197 systemd[1]: Stopped node_exporter container.
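
The status=2/INVALIDARGUMENT above does not indicate an exporter crash: Ansible requested state=restarted, systemd ran the unit's stop path, and the unit's main process (the pid recorded in /run/node_exporter.pid) exited non-zero while the container was torn down, so systemd records "Failed with result 'exit-code'" for the stopping instance and immediately starts the replacement at 18:10:48. When triaging such sequences it is easier to read the unit's recorded exit data than the surrounding journal lines; a small helper, assuming systemctl is available:

    import subprocess

    def unit_exit_info(unit: str) -> dict:
        # `systemctl show -p` prints KEY=VALUE lines for the requested properties.
        out = subprocess.run(
            ["systemctl", "show", unit, "-p",
             "ActiveState,Result,ExecMainStatus,NRestarts"],
            capture_output=True, text=True, check=True).stdout
        return dict(line.split("=", 1) for line in out.splitlines() if line)

    print(unit_exit_info("edpm_node_exporter.service"))
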
Nov 26 18:10:48 np0005537197 systemd[1]: Starting node_exporter container...
Nov 26 18:10:48 np0005537197 systemd[1]: Started libcrun container.
Nov 26 18:10:48 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e39057cfaea66f24aeb9319e5fb9d9ae7fbf05eddb6cfdf9a113596ce0e86c59/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 18:10:48 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e39057cfaea66f24aeb9319e5fb9d9ae7fbf05eddb6cfdf9a113596ce0e86c59/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 26 18:10:48 np0005537197 systemd[1]: Started /usr/bin/podman healthcheck run 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262.
Nov 26 18:10:48 np0005537197 podman[201965]: 2025-11-26 23:10:48.523491011 +0000 UTC m=+0.182379371 container init 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.546Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.546Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.546Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.547Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.547Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.547Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=arp
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=bcache
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=bonding
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=cpu
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=edac
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=filefd
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=netclass
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=netdev
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=netstat
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=nfs
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=nvme
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=softnet
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=systemd
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.548Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.549Z caller=node_exporter.go:117 level=info collector=xfs
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.549Z caller=node_exporter.go:117 level=info collector=zfs
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.549Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Nov 26 18:10:48 np0005537197 node_exporter[201980]: ts=2025-11-26T23:10:48.550Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Nov 26 18:10:48 np0005537197 podman[201965]: 2025-11-26 23:10:48.565636641 +0000 UTC m=+0.224524961 container start 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 18:10:48 np0005537197 podman[201965]: node_exporter
Nov 26 18:10:48 np0005537197 systemd[1]: Started node_exporter container.
Nov 26 18:10:48 np0005537197 podman[201990]: 2025-11-26 23:10:48.670528853 +0000 UTC m=+0.087096506 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 18:10:49 np0005537197 python3.9[202165]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:10:50 np0005537197 python3.9[202288]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764198648.9351816-663-81236792628057/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:10:51 np0005537197 python3.9[202440]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Nov 26 18:10:52 np0005537197 python3.9[202592]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 18:10:53 np0005537197 python3[202744]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 18:10:54 np0005537197 podman[202757]: 2025-11-26 23:10:54.893755144 +0000 UTC m=+1.509553517 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Nov 26 18:10:55 np0005537197 podman[202853]: 2025-11-26 23:10:55.057567206 +0000 UTC m=+0.055238953 container create 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 18:10:55 np0005537197 podman[202853]: 2025-11-26 23:10:55.022969438 +0000 UTC m=+0.020641275 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Nov 26 18:10:55 np0005537197 python3[202744]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Nov 26 18:10:56 np0005537197 python3.9[203043]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:10:57 np0005537197 python3.9[203197]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:10:57 np0005537197 python3.9[203348]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764198657.127475-716-143888383361306/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:10:58 np0005537197 podman[203396]: 2025-11-26 23:10:58.349822679 +0000 UTC m=+0.098905093 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 26 18:10:58 np0005537197 python3.9[203443]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 18:10:58 np0005537197 systemd[1]: Reloading.
Nov 26 18:10:58 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:10:58 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:10:59 np0005537197 python3.9[203555]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:10:59 np0005537197 systemd[1]: Reloading.
Nov 26 18:10:59 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:10:59 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:11:00 np0005537197 systemd[1]: Starting podman_exporter container...
Nov 26 18:11:00 np0005537197 systemd[1]: Started libcrun container.
Nov 26 18:11:00 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200fa2c37faf0aff755e2e56bc48c9632aa4533029a08728c2680686aea3484c/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 26 18:11:00 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200fa2c37faf0aff755e2e56bc48c9632aa4533029a08728c2680686aea3484c/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 18:11:00 np0005537197 systemd[1]: Started /usr/bin/podman healthcheck run 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897.
Nov 26 18:11:00 np0005537197 podman[203595]: 2025-11-26 23:11:00.219767947 +0000 UTC m=+0.156556418 container init 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 18:11:00 np0005537197 podman_exporter[203610]: ts=2025-11-26T23:11:00.236Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Nov 26 18:11:00 np0005537197 podman_exporter[203610]: ts=2025-11-26T23:11:00.236Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Nov 26 18:11:00 np0005537197 podman_exporter[203610]: ts=2025-11-26T23:11:00.236Z caller=handler.go:94 level=info msg="enabled collectors"
Nov 26 18:11:00 np0005537197 podman_exporter[203610]: ts=2025-11-26T23:11:00.236Z caller=handler.go:105 level=info collector=container
Nov 26 18:11:00 np0005537197 podman[203595]: 2025-11-26 23:11:00.258453474 +0000 UTC m=+0.195241925 container start 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 18:11:00 np0005537197 podman[203595]: podman_exporter
Nov 26 18:11:00 np0005537197 systemd[1]: Starting Podman API Service...
Nov 26 18:11:00 np0005537197 systemd[1]: Started podman_exporter container.
Nov 26 18:11:00 np0005537197 systemd[1]: Started Podman API Service.
Nov 26 18:11:00 np0005537197 podman[203621]: time="2025-11-26T23:11:00Z" level=info msg="/usr/bin/podman filtering at log level info"
Nov 26 18:11:00 np0005537197 podman[203621]: time="2025-11-26T23:11:00Z" level=info msg="Setting parallel job count to 25"
Nov 26 18:11:00 np0005537197 podman[203621]: time="2025-11-26T23:11:00Z" level=info msg="Using sqlite as database backend"
Nov 26 18:11:00 np0005537197 podman[203621]: time="2025-11-26T23:11:00Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Nov 26 18:11:00 np0005537197 podman[203621]: time="2025-11-26T23:11:00Z" level=info msg="Using systemd socket activation to determine API endpoint"
Nov 26 18:11:00 np0005537197 podman[203621]: time="2025-11-26T23:11:00Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Nov 26 18:11:00 np0005537197 podman[203621]: @ - - [26/Nov/2025:23:11:00 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Nov 26 18:11:00 np0005537197 podman[203621]: time="2025-11-26T23:11:00Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 18:11:00 np0005537197 podman[203621]: @ - - [26/Nov/2025:23:11:00 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 19585 "" "Go-http-client/1.1"
Nov 26 18:11:00 np0005537197 podman_exporter[203610]: ts=2025-11-26T23:11:00.363Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Nov 26 18:11:00 np0005537197 podman_exporter[203610]: ts=2025-11-26T23:11:00.363Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Nov 26 18:11:00 np0005537197 podman_exporter[203610]: ts=2025-11-26T23:11:00.364Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Nov 26 18:11:00 np0005537197 podman[203620]: 2025-11-26 23:11:00.371039393 +0000 UTC m=+0.088631488 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 18:11:00 np0005537197 systemd[1]: 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897-b5a8392f8307d88.service: Main process exited, code=exited, status=1/FAILURE
Nov 26 18:11:00 np0005537197 systemd[1]: 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897-b5a8392f8307d88.service: Failed with result 'exit-code'.
Nov 26 18:11:01 np0005537197 python3.9[203810]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 18:11:01 np0005537197 systemd[1]: Stopping podman_exporter container...
Nov 26 18:11:01 np0005537197 podman[203621]: @ - - [26/Nov/2025:23:11:00 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 1449 "" "Go-http-client/1.1"
Nov 26 18:11:01 np0005537197 systemd[1]: libpod-28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897.scope: Deactivated successfully.
Nov 26 18:11:01 np0005537197 conmon[203610]: conmon 28f8ec2f1010e38a0885 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897.scope/container/memory.events
Nov 26 18:11:01 np0005537197 podman[203814]: 2025-11-26 23:11:01.452700375 +0000 UTC m=+0.070248134 container died 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 18:11:01 np0005537197 systemd[1]: 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897-b5a8392f8307d88.timer: Deactivated successfully.
Nov 26 18:11:01 np0005537197 systemd[1]: Stopped /usr/bin/podman healthcheck run 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897.
Nov 26 18:11:01 np0005537197 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897-userdata-shm.mount: Deactivated successfully.
Nov 26 18:11:01 np0005537197 systemd[1]: var-lib-containers-storage-overlay-200fa2c37faf0aff755e2e56bc48c9632aa4533029a08728c2680686aea3484c-merged.mount: Deactivated successfully.
Nov 26 18:11:01 np0005537197 podman[203814]: 2025-11-26 23:11:01.681249893 +0000 UTC m=+0.298797662 container cleanup 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 18:11:01 np0005537197 podman[203814]: podman_exporter
Nov 26 18:11:01 np0005537197 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Nov 26 18:11:01 np0005537197 podman[203841]: podman_exporter
Nov 26 18:11:01 np0005537197 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Nov 26 18:11:01 np0005537197 systemd[1]: Stopped podman_exporter container.
Nov 26 18:11:01 np0005537197 systemd[1]: Starting podman_exporter container...
Nov 26 18:11:01 np0005537197 systemd[1]: Started libcrun container.
Nov 26 18:11:01 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200fa2c37faf0aff755e2e56bc48c9632aa4533029a08728c2680686aea3484c/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 26 18:11:01 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/200fa2c37faf0aff755e2e56bc48c9632aa4533029a08728c2680686aea3484c/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 18:11:01 np0005537197 systemd[1]: Started /usr/bin/podman healthcheck run 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897.
Nov 26 18:11:01 np0005537197 podman[203854]: 2025-11-26 23:11:01.981677589 +0000 UTC m=+0.171888300 container init 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 18:11:02 np0005537197 podman_exporter[203869]: ts=2025-11-26T23:11:02.008Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Nov 26 18:11:02 np0005537197 podman_exporter[203869]: ts=2025-11-26T23:11:02.008Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Nov 26 18:11:02 np0005537197 podman_exporter[203869]: ts=2025-11-26T23:11:02.008Z caller=handler.go:94 level=info msg="enabled collectors"
Nov 26 18:11:02 np0005537197 podman_exporter[203869]: ts=2025-11-26T23:11:02.008Z caller=handler.go:105 level=info collector=container
Nov 26 18:11:02 np0005537197 podman[203621]: @ - - [26/Nov/2025:23:11:02 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Nov 26 18:11:02 np0005537197 podman[203621]: time="2025-11-26T23:11:02Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 18:11:02 np0005537197 podman[203854]: 2025-11-26 23:11:02.018227388 +0000 UTC m=+0.208438069 container start 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 18:11:02 np0005537197 podman[203854]: podman_exporter
Nov 26 18:11:02 np0005537197 systemd[1]: Started podman_exporter container.
Nov 26 18:11:02 np0005537197 podman[203621]: @ - - [26/Nov/2025:23:11:02 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 19587 "" "Go-http-client/1.1"
Nov 26 18:11:02 np0005537197 podman_exporter[203869]: ts=2025-11-26T23:11:02.038Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Nov 26 18:11:02 np0005537197 podman_exporter[203869]: ts=2025-11-26T23:11:02.039Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Nov 26 18:11:02 np0005537197 podman_exporter[203869]: ts=2025-11-26T23:11:02.040Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Nov 26 18:11:02 np0005537197 podman[203878]: 2025-11-26 23:11:02.12685645 +0000 UTC m=+0.089921391 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 18:11:02 np0005537197 python3.9[204055]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:11:03 np0005537197 python3.9[204178]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764198662.323825-748-224019924573837/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:11:04 np0005537197 python3.9[204330]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Nov 26 18:11:05 np0005537197 python3.9[204482]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 18:11:06 np0005537197 podman[204606]: 2025-11-26 23:11:06.520044293 +0000 UTC m=+0.071140058 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 26 18:11:06 np0005537197 systemd[1]: bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517-1157307bff510f04.service: Main process exited, code=exited, status=1/FAILURE
Nov 26 18:11:06 np0005537197 systemd[1]: bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517-1157307bff510f04.service: Failed with result 'exit-code'.
Nov 26 18:11:06 np0005537197 python3[204653]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 18:11:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:11:09.611 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 18:11:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:11:09.613 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 18:11:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:11:09.613 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 18:11:09 np0005537197 podman[204667]: 2025-11-26 23:11:09.638907129 +0000 UTC m=+2.759299036 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Nov 26 18:11:09 np0005537197 podman[204763]: 2025-11-26 23:11:09.806341337 +0000 UTC m=+0.060715198 container create db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_id=edpm, vendor=Red Hat, Inc., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal)
Nov 26 18:11:09 np0005537197 podman[204763]: 2025-11-26 23:11:09.772771617 +0000 UTC m=+0.027145508 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Nov 26 18:11:09 np0005537197 python3[204653]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Nov 26 18:11:10 np0005537197 python3.9[204953]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:11:12 np0005537197 python3.9[205107]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:11:12 np0005537197 python3.9[205258]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764198672.1363351-801-50947228411897/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:11:13 np0005537197 python3.9[205334]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 18:11:13 np0005537197 systemd[1]: Reloading.
Nov 26 18:11:13 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:11:13 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:11:14 np0005537197 podman[205416]: 2025-11-26 23:11:14.390522313 +0000 UTC m=+0.195456519 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 26 18:11:14 np0005537197 python3.9[205460]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:11:14 np0005537197 systemd[1]: Reloading.
Nov 26 18:11:14 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:11:14 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:11:14 np0005537197 systemd[1]: Starting openstack_network_exporter container...
Nov 26 18:11:15 np0005537197 systemd[1]: Started libcrun container.
Nov 26 18:11:15 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc13b2492a21bf1aea80b58b8780b2e534408293d4ecd8e596791ca456feac96/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 26 18:11:15 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc13b2492a21bf1aea80b58b8780b2e534408293d4ecd8e596791ca456feac96/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 18:11:15 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc13b2492a21bf1aea80b58b8780b2e534408293d4ecd8e596791ca456feac96/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 26 18:11:15 np0005537197 systemd[1]: Started /usr/bin/podman healthcheck run db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec.
Nov 26 18:11:15 np0005537197 podman[205506]: 2025-11-26 23:11:15.208666949 +0000 UTC m=+0.187550500 container init db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, name=ubi9-minimal, version=9.6, architecture=x86_64, maintainer=Red Hat, Inc., vcs-type=git, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 26 18:11:15 np0005537197 openstack_network_exporter[205522]: INFO    23:11:15 main.go:48: registering *bridge.Collector
Nov 26 18:11:15 np0005537197 openstack_network_exporter[205522]: INFO    23:11:15 main.go:48: registering *coverage.Collector
Nov 26 18:11:15 np0005537197 openstack_network_exporter[205522]: INFO    23:11:15 main.go:48: registering *datapath.Collector
Nov 26 18:11:15 np0005537197 openstack_network_exporter[205522]: INFO    23:11:15 main.go:48: registering *iface.Collector
Nov 26 18:11:15 np0005537197 openstack_network_exporter[205522]: INFO    23:11:15 main.go:48: registering *memory.Collector
Nov 26 18:11:15 np0005537197 openstack_network_exporter[205522]: INFO    23:11:15 main.go:48: registering *ovnnorthd.Collector
Nov 26 18:11:15 np0005537197 openstack_network_exporter[205522]: INFO    23:11:15 main.go:48: registering *ovn.Collector
Nov 26 18:11:15 np0005537197 openstack_network_exporter[205522]: INFO    23:11:15 main.go:48: registering *ovsdbserver.Collector
Nov 26 18:11:15 np0005537197 openstack_network_exporter[205522]: INFO    23:11:15 main.go:48: registering *pmd_perf.Collector
Nov 26 18:11:15 np0005537197 openstack_network_exporter[205522]: INFO    23:11:15 main.go:48: registering *pmd_rxq.Collector
Nov 26 18:11:15 np0005537197 openstack_network_exporter[205522]: INFO    23:11:15 main.go:48: registering *vswitch.Collector
Nov 26 18:11:15 np0005537197 openstack_network_exporter[205522]: NOTICE  23:11:15 main.go:76: listening on https://:9105/metrics
Nov 26 18:11:15 np0005537197 podman[205506]: 2025-11-26 23:11:15.253504264 +0000 UTC m=+0.232387765 container start db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, release=1755695350, config_id=edpm, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, distribution-scope=public, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, name=ubi9-minimal, version=9.6, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 26 18:11:15 np0005537197 podman[205506]: openstack_network_exporter
Nov 26 18:11:15 np0005537197 systemd[1]: Started openstack_network_exporter container.
Nov 26 18:11:15 np0005537197 podman[205532]: 2025-11-26 23:11:15.377551519 +0000 UTC m=+0.105220723 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, release=1755695350, vendor=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 26 18:11:16 np0005537197 python3.9[205706]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 18:11:16 np0005537197 systemd[1]: Stopping openstack_network_exporter container...
Nov 26 18:11:16 np0005537197 systemd[1]: libpod-db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec.scope: Deactivated successfully.
Nov 26 18:11:16 np0005537197 podman[205710]: 2025-11-26 23:11:16.232409964 +0000 UTC m=+0.068522414 container died db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.openshift.expose-services=, release=1755695350, architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=edpm, container_name=openstack_network_exporter)
Nov 26 18:11:16 np0005537197 systemd[1]: db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec-355266c5758af9e8.timer: Deactivated successfully.
Nov 26 18:11:16 np0005537197 systemd[1]: Stopped /usr/bin/podman healthcheck run db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec.
Nov 26 18:11:16 np0005537197 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec-userdata-shm.mount: Deactivated successfully.
Nov 26 18:11:16 np0005537197 systemd[1]: var-lib-containers-storage-overlay-dc13b2492a21bf1aea80b58b8780b2e534408293d4ecd8e596791ca456feac96-merged.mount: Deactivated successfully.
Nov 26 18:11:16 np0005537197 nova_compute[189387]: 2025-11-26 23:11:16.477 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:11:16 np0005537197 nova_compute[189387]: 2025-11-26 23:11:16.531 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:11:16 np0005537197 nova_compute[189387]: 2025-11-26 23:11:16.531 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:11:16 np0005537197 nova_compute[189387]: 2025-11-26 23:11:16.532 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:11:16 np0005537197 nova_compute[189387]: 2025-11-26 23:11:16.532 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 18:11:16 np0005537197 nova_compute[189387]: 2025-11-26 23:11:16.532 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:11:16 np0005537197 nova_compute[189387]: 2025-11-26 23:11:16.566 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 18:11:16 np0005537197 nova_compute[189387]: 2025-11-26 23:11:16.566 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 18:11:16 np0005537197 nova_compute[189387]: 2025-11-26 23:11:16.566 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 18:11:16 np0005537197 nova_compute[189387]: 2025-11-26 23:11:16.566 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 18:11:16 np0005537197 nova_compute[189387]: 2025-11-26 23:11:16.764 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 18:11:16 np0005537197 nova_compute[189387]: 2025-11-26 23:11:16.765 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5889MB free_disk=72.4397087097168GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 18:11:16 np0005537197 nova_compute[189387]: 2025-11-26 23:11:16.765 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 18:11:16 np0005537197 nova_compute[189387]: 2025-11-26 23:11:16.766 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 18:11:16 np0005537197 podman[205710]: 2025-11-26 23:11:16.820409547 +0000 UTC m=+0.656521987 container cleanup db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, config_id=edpm, io.openshift.expose-services=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 26 18:11:16 np0005537197 podman[205710]: openstack_network_exporter
Nov 26 18:11:16 np0005537197 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Nov 26 18:11:16 np0005537197 nova_compute[189387]: 2025-11-26 23:11:16.860 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 18:11:16 np0005537197 nova_compute[189387]: 2025-11-26 23:11:16.860 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 18:11:16 np0005537197 nova_compute[189387]: 2025-11-26 23:11:16.902 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 18:11:16 np0005537197 nova_compute[189387]: 2025-11-26 23:11:16.923 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 18:11:16 np0005537197 podman[205739]: openstack_network_exporter
Nov 26 18:11:16 np0005537197 nova_compute[189387]: 2025-11-26 23:11:16.925 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 18:11:16 np0005537197 nova_compute[189387]: 2025-11-26 23:11:16.926 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
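
The lockutils line records that the resource tracker held the process-wide "compute_resources" lock for 0.160s while updating available resources. A minimal sketch of the named-lock-with-timing pattern that oslo.concurrency implements (not the oslo code itself):

    import threading, time

    _locks = {}

    def synchronized(name):
        # Sketch of the oslo.concurrency-style pattern: serialize callers
        # on a named lock and report how long each holder kept it.
        lock = _locks.setdefault(name, threading.Lock())
        def wrap(fn):
            def inner(*args, **kwargs):
                with lock:
                    start = time.monotonic()
                    try:
                        return fn(*args, **kwargs)
                    finally:
                        held = time.monotonic() - start
                        print(f'Lock "{name}" released :: held {held:.3f}s')
            return inner
        return wrap
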
Nov 26 18:11:16 np0005537197 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Nov 26 18:11:16 np0005537197 systemd[1]: Stopped openstack_network_exporter container.
Nov 26 18:11:16 np0005537197 podman[205738]: 2025-11-26 23:11:16.947960689 +0000 UTC m=+0.084651152 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 26 18:11:16 np0005537197 systemd[1]: Starting openstack_network_exporter container...
Nov 26 18:11:17 np0005537197 systemd[1]: Started libcrun container.
Nov 26 18:11:17 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc13b2492a21bf1aea80b58b8780b2e534408293d4ecd8e596791ca456feac96/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 26 18:11:17 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc13b2492a21bf1aea80b58b8780b2e534408293d4ecd8e596791ca456feac96/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 18:11:17 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc13b2492a21bf1aea80b58b8780b2e534408293d4ecd8e596791ca456feac96/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
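
The three kernel warnings above mean the XFS filesystem backing these bind mounts stores inode timestamps as 32-bit signed epoch seconds (it lacks the bigtime feature), so 0x7fffffff is the last representable instant:

    from datetime import datetime, timezone
    # 0x7fffffff = largest 32-bit signed epoch second, the limit the
    # kernel warns about for XFS without the "bigtime" feature.
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
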
Nov 26 18:11:17 np0005537197 systemd[1]: Started /usr/bin/podman healthcheck run db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec.
Nov 26 18:11:17 np0005537197 podman[205769]: 2025-11-26 23:11:17.13304406 +0000 UTC m=+0.160468999 container init db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 26 18:11:17 np0005537197 openstack_network_exporter[205787]: INFO    23:11:17 main.go:48: registering *bridge.Collector
Nov 26 18:11:17 np0005537197 openstack_network_exporter[205787]: INFO    23:11:17 main.go:48: registering *coverage.Collector
Nov 26 18:11:17 np0005537197 openstack_network_exporter[205787]: INFO    23:11:17 main.go:48: registering *datapath.Collector
Nov 26 18:11:17 np0005537197 openstack_network_exporter[205787]: INFO    23:11:17 main.go:48: registering *iface.Collector
Nov 26 18:11:17 np0005537197 openstack_network_exporter[205787]: INFO    23:11:17 main.go:48: registering *memory.Collector
Nov 26 18:11:17 np0005537197 openstack_network_exporter[205787]: INFO    23:11:17 main.go:48: registering *ovnnorthd.Collector
Nov 26 18:11:17 np0005537197 openstack_network_exporter[205787]: INFO    23:11:17 main.go:48: registering *ovn.Collector
Nov 26 18:11:17 np0005537197 openstack_network_exporter[205787]: INFO    23:11:17 main.go:48: registering *ovsdbserver.Collector
Nov 26 18:11:17 np0005537197 openstack_network_exporter[205787]: INFO    23:11:17 main.go:48: registering *pmd_perf.Collector
Nov 26 18:11:17 np0005537197 openstack_network_exporter[205787]: INFO    23:11:17 main.go:48: registering *pmd_rxq.Collector
Nov 26 18:11:17 np0005537197 openstack_network_exporter[205787]: INFO    23:11:17 main.go:48: registering *vswitch.Collector
Nov 26 18:11:17 np0005537197 openstack_network_exporter[205787]: NOTICE  23:11:17 main.go:76: listening on https://:9105/metrics
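
The startup sequence above is the usual Prometheus exporter pattern: register one Collector per subsystem (bridge, coverage, datapath, ...), then expose everything on a single /metrics endpoint, here over TLS on port 9105. The exporter itself is Go; a minimal Python analogue of the same register-then-serve pattern, with an illustrative metric name:

    from prometheus_client import CollectorRegistry, Gauge, generate_latest

    # Register metrics against a registry, then render the /metrics
    # payload; the metric name here is illustrative only.
    registry = CollectorRegistry()
    up = Gauge('demo_exporter_up', 'Illustrative exporter liveness gauge',
               registry=registry)
    up.set(1)
    print(generate_latest(registry).decode())
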
Nov 26 18:11:17 np0005537197 podman[205769]: 2025-11-26 23:11:17.166429027 +0000 UTC m=+0.193853916 container start db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 26 18:11:17 np0005537197 podman[205769]: openstack_network_exporter
Nov 26 18:11:17 np0005537197 systemd[1]: Started openstack_network_exporter container.
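
This closes the crash/restart cycle that began with the status=2/INVALIDARGUMENT exit above: the container config carries 'restart': 'always', which the generated edpm unit expresses as a systemd Restart= policy, so systemd stopped the failed unit and immediately started it again. How often a unit has cycled can be read back from systemd (sketch; unit name taken from this log):

    import subprocess
    # NRestarts is a standard systemd unit property exposed by
    # "systemctl show".
    out = subprocess.check_output(
        ['systemctl', 'show', '-p', 'NRestarts',
         'edpm_openstack_network_exporter.service'])
    print(out.decode().strip())  # e.g. NRestarts=3
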
Nov 26 18:11:17 np0005537197 podman[205797]: 2025-11-26 23:11:17.279287223 +0000 UTC m=+0.090158276 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.expose-services=, vendor=Red Hat, Inc., distribution-scope=public, release=1755695350, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, architecture=x86_64, maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
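
The health_status=healthy events in this log come from podman's healthcheck machinery: a transient systemd timer periodically runs "podman healthcheck run <id>" (visible a few lines up), which executes the container's configured 'test' command and updates the recorded health state. The same check can be driven and read back by hand (container id abbreviated from this log; the inspect field path varies across podman versions, .State.Health vs .State.Healthcheck):

    import subprocess
    cid = 'db7eb26fc777'  # abbreviated id from the log above
    # Run the configured healthcheck once, then read the recorded status.
    subprocess.run(['podman', 'healthcheck', 'run', cid], check=False)
    status = subprocess.check_output(
        ['podman', 'inspect', '--format', '{{.State.Health.Status}}', cid])
    print(status.decode().strip())  # healthy / unhealthy
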
Nov 26 18:11:17 np0005537197 nova_compute[189387]: 2025-11-26 23:11:17.519 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:11:17 np0005537197 nova_compute[189387]: 2025-11-26 23:11:17.519 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:11:17 np0005537197 nova_compute[189387]: 2025-11-26 23:11:17.519 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 18:11:17 np0005537197 nova_compute[189387]: 2025-11-26 23:11:17.520 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 18:11:17 np0005537197 nova_compute[189387]: 2025-11-26 23:11:17.541 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 18:11:17 np0005537197 nova_compute[189387]: 2025-11-26 23:11:17.543 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:11:18 np0005537197 python3.9[205967]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
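
Journald captures each ansible module invocation with its full argument set; this find call is non-recursive, matches directories only, and applies no name patterns, so it simply lists the first-level entries of /var/lib/openstack/healthchecks/. A rough Python equivalent of what the module computes:

    from pathlib import Path
    # Rough equivalent of the logged ansible.builtin.find invocation:
    # file_type=directory, recurse=False, no patterns.
    base = Path('/var/lib/openstack/healthchecks/')
    matches = sorted(str(p) for p in base.iterdir() if p.is_dir())
    print(matches)
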
Nov 26 18:11:18 np0005537197 nova_compute[189387]: 2025-11-26 23:11:18.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:11:18 np0005537197 nova_compute[189387]: 2025-11-26 23:11:18.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:11:19 np0005537197 podman[206091]: 2025-11-26 23:11:19.173344321 +0000 UTC m=+0.081958617 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 18:11:19 np0005537197 python3.9[206144]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Nov 26 18:11:20 np0005537197 python3.9[206310]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:11:20 np0005537197 systemd[1]: Started libpod-conmon-3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0.scope.
Nov 26 18:11:20 np0005537197 podman[206311]: 2025-11-26 23:11:20.569170012 +0000 UTC m=+0.071669122 container exec 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 26 18:11:20 np0005537197 podman[206311]: 2025-11-26 23:11:20.606432677 +0000 UTC m=+0.108931777 container exec_died 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 26 18:11:20 np0005537197 systemd[1]: libpod-conmon-3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0.scope: Deactivated successfully.
Nov 26 18:11:21 np0005537197 python3.9[206494]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:11:21 np0005537197 systemd[1]: Started libpod-conmon-3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0.scope.
Nov 26 18:11:21 np0005537197 podman[206495]: 2025-11-26 23:11:21.411857569 +0000 UTC m=+0.067376313 container exec 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 18:11:21 np0005537197 podman[206495]: 2025-11-26 23:11:21.445466492 +0000 UTC m=+0.100985216 container exec_died 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 26 18:11:21 np0005537197 systemd[1]: libpod-conmon-3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0.scope: Deactivated successfully.
Nov 26 18:11:22 np0005537197 python3.9[206675]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
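
The four steps just logged for ovn_controller (podman_container_info, exec "id -u", exec "id -g", then file with owner/group and mode 0700) are one unit of work: discover the uid/gid the container runs as, then make the host-side healthcheck directory owned by that identity. The same sequence repeats below for ovn_metadata_agent, multipathd, ceilometer_agent_compute, and node_exporter. A condensed sketch of the pattern outside ansible (assumes root and podman on PATH):

    import os, pathlib, subprocess

    def fix_healthcheck_owner(container, path):
        # Ask the container which uid/gid it runs as, then chown/chmod
        # the host-side healthcheck mount to match (mode 0700).
        uid = int(subprocess.check_output(['podman', 'exec', container, 'id', '-u']))
        gid = int(subprocess.check_output(['podman', 'exec', container, 'id', '-g']))
        p = pathlib.Path(path)
        os.chown(p, uid, gid)
        p.chmod(0o700)

    # fix_healthcheck_owner('ovn_controller',
    #                       '/var/lib/openstack/healthchecks/ovn_controller')
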
Nov 26 18:11:23 np0005537197 python3.9[206827]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Nov 26 18:11:23 np0005537197 python3.9[206993]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:11:23 np0005537197 systemd[1]: Started libpod-conmon-b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b.scope.
Nov 26 18:11:23 np0005537197 podman[206994]: 2025-11-26 23:11:23.896348719 +0000 UTC m=+0.080601740 container exec b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 26 18:11:23 np0005537197 podman[206994]: 2025-11-26 23:11:23.933407438 +0000 UTC m=+0.117660459 container exec_died b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent)
Nov 26 18:11:23 np0005537197 systemd[1]: libpod-conmon-b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b.scope: Deactivated successfully.
Nov 26 18:11:24 np0005537197 python3.9[207176]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:11:24 np0005537197 systemd[1]: Started libpod-conmon-b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b.scope.
Nov 26 18:11:24 np0005537197 podman[207177]: 2025-11-26 23:11:24.991569779 +0000 UTC m=+0.104073822 container exec b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 26 18:11:25 np0005537197 podman[207177]: 2025-11-26 23:11:25.023410653 +0000 UTC m=+0.135914666 container exec_died b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 26 18:11:25 np0005537197 systemd[1]: libpod-conmon-b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b.scope: Deactivated successfully.
Nov 26 18:11:25 np0005537197 python3.9[207358]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:11:26 np0005537197 python3.9[207510]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Nov 26 18:11:27 np0005537197 python3.9[207673]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:11:27 np0005537197 systemd[1]: Started libpod-conmon-2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531.scope.
Nov 26 18:11:27 np0005537197 podman[207674]: 2025-11-26 23:11:27.962541702 +0000 UTC m=+0.099083724 container exec 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 26 18:11:27 np0005537197 podman[207674]: 2025-11-26 23:11:27.996819053 +0000 UTC m=+0.133361075 container exec_died 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 26 18:11:28 np0005537197 systemd[1]: libpod-conmon-2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531.scope: Deactivated successfully.
Nov 26 18:11:28 np0005537197 podman[207829]: 2025-11-26 23:11:28.62889707 +0000 UTC m=+0.064768690 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Nov 26 18:11:28 np0005537197 python3.9[207878]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:11:28 np0005537197 systemd[1]: Started libpod-conmon-2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531.scope.
Nov 26 18:11:28 np0005537197 podman[207879]: 2025-11-26 23:11:28.968345968 +0000 UTC m=+0.101570482 container exec 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Nov 26 18:11:29 np0005537197 podman[207879]: 2025-11-26 23:11:29.004642757 +0000 UTC m=+0.137867231 container exec_died 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 26 18:11:29 np0005537197 systemd[1]: libpod-conmon-2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531.scope: Deactivated successfully.
Nov 26 18:11:29 np0005537197 python3.9[208061]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:11:30 np0005537197 python3.9[208213]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Nov 26 18:11:31 np0005537197 python3.9[208378]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:11:31 np0005537197 systemd[1]: Started libpod-conmon-bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517.scope.
Nov 26 18:11:31 np0005537197 podman[208379]: 2025-11-26 23:11:31.967127155 +0000 UTC m=+0.098228069 container exec bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Nov 26 18:11:31 np0005537197 podman[208379]: 2025-11-26 23:11:31.999117824 +0000 UTC m=+0.130218678 container exec_died bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 26 18:11:32 np0005537197 systemd[1]: libpod-conmon-bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517.scope: Deactivated successfully.
Nov 26 18:11:32 np0005537197 podman[208536]: 2025-11-26 23:11:32.769031036 +0000 UTC m=+0.084733003 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 18:11:33 np0005537197 python3.9[208587]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:11:33 np0005537197 systemd[1]: Started libpod-conmon-bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517.scope.
Nov 26 18:11:33 np0005537197 podman[208588]: 2025-11-26 23:11:33.170409415 +0000 UTC m=+0.108757462 container exec bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 26 18:11:33 np0005537197 podman[208588]: 2025-11-26 23:11:33.214830219 +0000 UTC m=+0.153178246 container exec_died bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 26 18:11:33 np0005537197 systemd[1]: libpod-conmon-bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517.scope: Deactivated successfully.
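The `container exec` / `container exec_died` pair above is podman's event-stream record of a single exec session: conmon runs inside a transient `libpod-conmon-<id>.scope` unit, and systemd deactivates the scope once the exec process exits. The same stream can be watched live; a minimal sketch (container name taken from the events above; JSON field names can vary slightly between podman versions):

    import json
    import subprocess

    # Follow podman's event stream for one container. With --format json,
    # `podman events` blocks and emits one JSON object per line.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json",
         "--filter", "container=ceilometer_agent_compute"],
        stdout=subprocess.PIPE, text=True)

    for line in proc.stdout:
        event = json.loads(line)
        if event.get("Status") in ("exec", "exec_died"):
            print(event.get("Status"), event.get("Name"), event.get("Time"))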
Nov 26 18:11:34 np0005537197 python3.9[208772]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:11:34 np0005537197 python3.9[208924]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Nov 26 18:11:35 np0005537197 python3.9[209090]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:11:36 np0005537197 systemd[1]: Started libpod-conmon-413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262.scope.
Nov 26 18:11:36 np0005537197 podman[209091]: 2025-11-26 23:11:36.079354094 +0000 UTC m=+0.086126443 container exec 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 18:11:36 np0005537197 podman[209091]: 2025-11-26 23:11:36.113528733 +0000 UTC m=+0.120301092 container exec_died 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 18:11:36 np0005537197 systemd[1]: libpod-conmon-413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262.scope: Deactivated successfully.
Nov 26 18:11:36 np0005537197 podman[209244]: 2025-11-26 23:11:36.767347614 +0000 UTC m=+0.090929037 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
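`health_status` events like the one above are emitted each time podman's healthcheck timer fires and runs the configured test (`/openstack/healthcheck compute`) inside the container; `health_failing_streak` counts consecutive failures. The same check can be triggered by hand; a small sketch, relying on `podman healthcheck run` exiting 0 when the check passes:

    import subprocess

    # Run the container's configured healthcheck once and interpret the
    # exit status; podman returns 0 for healthy, nonzero otherwise.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_compute"],
        capture_output=True, text=True)

    print("healthy" if result.returncode == 0 else "unhealthy")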
Nov 26 18:11:36 np0005537197 python3.9[209292]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:11:37 np0005537197 systemd[1]: Started libpod-conmon-413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262.scope.
Nov 26 18:11:37 np0005537197 podman[209293]: 2025-11-26 23:11:37.131371745 +0000 UTC m=+0.109001759 container exec 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 18:11:37 np0005537197 podman[209293]: 2025-11-26 23:11:37.164382622 +0000 UTC m=+0.142012556 container exec_died 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 18:11:37 np0005537197 systemd[1]: libpod-conmon-413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262.scope: Deactivated successfully.
Nov 26 18:11:37 np0005537197 python3.9[209477]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
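The tasks above form a pattern repeated for every telemetry container in this section: exec `id -u` and `id -g` inside the container to learn its runtime UID/GID, then recursively chown the host-side healthcheck directory to match (0:0 here, 42405 for ceilometer earlier) so the script mounted at /openstack stays readable by the container user. A rough equivalent of the discovery step plus ansible-ansible.builtin.file with recurse=True, as a sketch:

    import os
    import subprocess

    NAME = "node_exporter"
    PATH = f"/var/lib/openstack/healthchecks/{NAME}"

    def container_id(flag):
        # `podman exec <name> id -u` / `id -g` prints the numeric id.
        out = subprocess.run(["podman", "exec", NAME, "id", flag],
                             capture_output=True, text=True, check=True)
        return int(out.stdout.strip())

    uid, gid = container_id("-u"), container_id("-g")

    # Recursive chown plus mode 0700, mirroring state=directory, recurse=True.
    os.chmod(PATH, 0o700)
    os.chown(PATH, uid, gid)
    for root, dirs, files in os.walk(PATH):
        for entry in dirs + files:
            os.chown(os.path.join(root, entry), uid, gid)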
Nov 26 18:11:39 np0005537197 python3.9[209629]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Nov 26 18:11:40 np0005537197 python3.9[209794]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:11:40 np0005537197 systemd[1]: Started libpod-conmon-28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897.scope.
Nov 26 18:11:40 np0005537197 podman[209795]: 2025-11-26 23:11:40.281347569 +0000 UTC m=+0.091073911 container exec 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 18:11:40 np0005537197 podman[209795]: 2025-11-26 23:11:40.315622441 +0000 UTC m=+0.125348743 container exec_died 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 18:11:40 np0005537197 systemd[1]: libpod-conmon-28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897.scope: Deactivated successfully.
Nov 26 18:11:41 np0005537197 python3.9[209977]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:11:41 np0005537197 systemd[1]: Started libpod-conmon-28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897.scope.
Nov 26 18:11:41 np0005537197 podman[209978]: 2025-11-26 23:11:41.301902296 +0000 UTC m=+0.094832565 container exec 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 18:11:41 np0005537197 podman[209978]: 2025-11-26 23:11:41.330809408 +0000 UTC m=+0.123739647 container exec_died 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 18:11:41 np0005537197 systemd[1]: libpod-conmon-28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897.scope: Deactivated successfully.
Nov 26 18:11:42 np0005537197 python3.9[210162]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
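podman_exporter gets its data through the podman API socket bind-mounted into the container (CONTAINER_HOST=unix:///run/podman/podman.sock in the config above). The libpod REST API behind that socket is ordinary HTTP over a unix socket; a stdlib-only sketch (the /v4.0.0 version prefix is an assumption matching this podman generation):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix socket, enough to talk to the libpod API."""
        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.sock_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.0.0/libpod/containers/json")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"][0], c["State"])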
Nov 26 18:11:42 np0005537197 python3.9[210314]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Nov 26 18:11:43 np0005537197 python3.9[210479]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:11:43 np0005537197 systemd[1]: Started libpod-conmon-db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec.scope.
Nov 26 18:11:44 np0005537197 podman[210480]: 2025-11-26 23:11:44.004909646 +0000 UTC m=+0.091023319 container exec db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, version=9.6, distribution-scope=public, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-type=git, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64)
Nov 26 18:11:44 np0005537197 podman[210480]: 2025-11-26 23:11:44.039346253 +0000 UTC m=+0.125459936 container exec_died db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.buildah.version=1.33.7, name=ubi9-minimal, release=1755695350, managed_by=edpm_ansible, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9)
Nov 26 18:11:44 np0005537197 systemd[1]: libpod-conmon-db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec.scope: Deactivated successfully.
Nov 26 18:11:44 np0005537197 podman[210636]: 2025-11-26 23:11:44.818425211 +0000 UTC m=+0.140996657 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 26 18:11:44 np0005537197 python3.9[210681]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:11:45 np0005537197 systemd[1]: Started libpod-conmon-db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec.scope.
Nov 26 18:11:45 np0005537197 podman[210689]: 2025-11-26 23:11:45.100659391 +0000 UTC m=+0.092070699 container exec db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=ubi9-minimal, config_id=edpm, distribution-scope=public, build-date=2025-08-20T13:12:41, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible)
Nov 26 18:11:45 np0005537197 podman[210689]: 2025-11-26 23:11:45.136850946 +0000 UTC m=+0.128262254 container exec_died db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vendor=Red Hat, Inc., version=9.6, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container)
Nov 26 18:11:45 np0005537197 systemd[1]: libpod-conmon-db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec.scope: Deactivated successfully.
Nov 26 18:11:45 np0005537197 python3.9[210869]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
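At this point all three exporters are in place: node_exporter on 9100, podman_exporter on 9882, and openstack_network_exporter on 9105, each with the TLS material from /var/lib/openstack/certs/telemetry/default mounted in and a web config file wired up. A scrape is then an ordinary Prometheus pull; a sketch assuming the endpoints serve HTTPS with certificates signed by the mounted telemetry CA bundle, and using the node's FQDN from the nova logs below:

    import ssl
    import urllib.request

    # CA bundle path taken from the volume mounts logged above.
    CTX = ssl.create_default_context(
        cafile="/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem")

    for port in (9100, 9882, 9105):
        url = f"https://compute-0.ctlplane.example.com:{port}/metrics"
        with urllib.request.urlopen(url, context=CTX) as resp:
            samples = [l for l in resp.read().decode().splitlines()
                       if l and not l.startswith("#")]
        print(port, len(samples), "samples")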
Nov 26 18:11:46 np0005537197 python3.9[211021]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:11:47 np0005537197 podman[211146]: 2025-11-26 23:11:47.59953844 +0000 UTC m=+0.084646063 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 26 18:11:47 np0005537197 podman[211145]: 2025-11-26 23:11:47.59919401 +0000 UTC m=+0.084015425 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 26 18:11:47 np0005537197 python3.9[211212]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:11:48 np0005537197 python3.9[211335]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764198707.1620913-1082-151744841701654/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:11:49 np0005537197 podman[211459]: 2025-11-26 23:11:49.352557422 +0000 UTC m=+0.088816649 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 18:11:49 np0005537197 python3.9[211512]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:11:50 np0005537197 python3.9[211665]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:11:51 np0005537197 python3.9[211743]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:11:51 np0005537197 python3.9[211895]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:11:52 np0005537197 python3.9[211973]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.x40_7_k_ recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:11:53 np0005537197 python3.9[212125]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:11:54 np0005537197 python3.9[212203]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:11:54 np0005537197 python3.9[212355]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:11:55 np0005537197 python3[212508]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 26 18:11:56 np0005537197 python3.9[212660]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:11:57 np0005537197 python3.9[212738]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:11:58 np0005537197 python3.9[212890]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:11:58 np0005537197 podman[212940]: 2025-11-26 23:11:58.776688407 +0000 UTC m=+0.093845458 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Nov 26 18:11:58 np0005537197 python3.9[212987]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:11:59 np0005537197 python3.9[213139]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:12:00 np0005537197 python3.9[213217]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:12:01 np0005537197 python3.9[213369]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:12:01 np0005537197 python3.9[213447]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:12:02 np0005537197 python3.9[213599]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:12:03 np0005537197 podman[213696]: 2025-11-26 23:12:03.379167085 +0000 UTC m=+0.075247911 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 18:12:03 np0005537197 python3.9[213748]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764198722.085436-1207-13810255249685/.source.nft follow=False _original_basename=ruleset.j2 checksum=fb3275eced3a2e06312143189928124e1b2df34a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:12:04 np0005537197 python3.9[213900]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:12:05 np0005537197 python3.9[214052]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
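The command above is the validation half of edpm's nftables update: concatenate the generated chain, flush, rule, and jump files and feed them to `nft -c -f -`, which parses and checks without committing anything. Combined with the .changed marker touched at 18:12:04 and removed at 18:12:09, the apply step a few lines below is both safe and idempotent; roughly:

    import os
    import subprocess

    FILES = ["/etc/nftables/edpm-chains.nft",
             "/etc/nftables/edpm-flushes.nft",
             "/etc/nftables/edpm-rules.nft",
             "/etc/nftables/edpm-update-jumps.nft",
             "/etc/nftables/edpm-jumps.nft"]
    FLAG = "/etc/nftables/edpm-rules.nft.changed"

    ruleset = "".join(open(f).read() for f in FILES)

    # nft -c -f - parses and checks the ruleset without applying it.
    subprocess.run(["nft", "-c", "-f", "-"], input=ruleset,
                   text=True, check=True)

    # Apply only when the rules actually changed, then clear the marker.
    if os.path.exists(FLAG):
        subprocess.run(["nft", "-f", "-"], input=ruleset,
                       text=True, check=True)
        os.remove(FLAG)

The real play applies the chains file unconditionally and only pipes the flush/rules/update-jumps subset through `nft -f -` on change, but the check-before-apply and flag-file shape is the same.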
Nov 26 18:12:06 np0005537197 python3.9[214207]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
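In the blockinfile record above, #012 is journald's escape for a newline, and validate=nft -c -f %s re-checks the whole file before the edit is committed. Decoded, the managed block written into /etc/sysconfig/nftables.conf reads:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK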
Nov 26 18:12:07 np0005537197 podman[214331]: 2025-11-26 23:12:07.137452705 +0000 UTC m=+0.098648791 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, io.buildah.version=1.41.4)
Nov 26 18:12:07 np0005537197 python3.9[214380]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:12:08 np0005537197 python3.9[214533]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:12:08 np0005537197 python3.9[214687]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:12:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:12:09.612 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 18:12:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:12:09.613 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 18:12:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:12:09.613 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
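The Acquiring / acquired / released triplet above (and the identical compute_resources lines from nova further down) is the standard oslo.concurrency trace: lockutils logs at DEBUG on entry, on grant with the wait time, and on release with the hold time. The agent code producing it is essentially just a decorated method; a minimal sketch with the public API:

    from oslo_concurrency import lockutils

    class ProcessMonitor:
        @lockutils.synchronized("_check_child_processes")
        def _check_child_processes(self):
            # Body runs under the named lock; oslo_concurrency emits the
            # "Acquiring lock" / "acquired" / "released" DEBUG lines seen
            # in the journal around this call.
            pass

    ProcessMonitor()._check_child_processes()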
Nov 26 18:12:09 np0005537197 python3.9[214842]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:12:10 np0005537197 systemd[1]: session-26.scope: Deactivated successfully.
Nov 26 18:12:10 np0005537197 systemd[1]: session-26.scope: Consumed 2min 4.302s CPU time.
Nov 26 18:12:10 np0005537197 systemd-logind[819]: Session 26 logged out. Waiting for processes to exit.
Nov 26 18:12:10 np0005537197 systemd-logind[819]: Removed session 26.
Nov 26 18:12:15 np0005537197 podman[214867]: 2025-11-26 23:12:15.840073438 +0000 UTC m=+0.140718642 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 18:12:16 np0005537197 nova_compute[189387]: 2025-11-26 23:12:16.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
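"Running periodic task" lines like this one are emitted by oslo.service's periodic task framework before each task call. A minimal sketch of how such a task is declared, assuming oslo.service; the spacing value is illustrative, not nova's actual setting:

    from oslo_service import periodic_task

    class ComputeManager(periodic_task.PeriodicTasks):
        # oslo.service iterates decorated methods and logs
        # "Running periodic task ComputeManager.<name>" before each call.
        @periodic_task.periodic_task(spacing=60)
        def _poll_unconfirmed_resizes(self, context):
            pass  # placeholder body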
Nov 26 18:12:16 np0005537197 systemd-logind[819]: New session 27 of user zuul.
Nov 26 18:12:16 np0005537197 systemd[1]: Started Session 27 of User zuul.
Nov 26 18:12:17 np0005537197 nova_compute[189387]: 2025-11-26 23:12:17.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:12:17 np0005537197 nova_compute[189387]: 2025-11-26 23:12:17.124 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 18:12:17 np0005537197 nova_compute[189387]: 2025-11-26 23:12:17.124 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 18:12:17 np0005537197 nova_compute[189387]: 2025-11-26 23:12:17.140 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 18:12:17 np0005537197 nova_compute[189387]: 2025-11-26 23:12:17.141 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:12:17 np0005537197 nova_compute[189387]: 2025-11-26 23:12:17.170 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 18:12:17 np0005537197 nova_compute[189387]: 2025-11-26 23:12:17.171 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 18:12:17 np0005537197 nova_compute[189387]: 2025-11-26 23:12:17.171 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 18:12:17 np0005537197 nova_compute[189387]: 2025-11-26 23:12:17.171 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 18:12:17 np0005537197 nova_compute[189387]: 2025-11-26 23:12:17.387 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 18:12:17 np0005537197 nova_compute[189387]: 2025-11-26 23:12:17.388 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5891MB free_disk=72.43940734863281GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 18:12:17 np0005537197 nova_compute[189387]: 2025-11-26 23:12:17.389 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 18:12:17 np0005537197 nova_compute[189387]: 2025-11-26 23:12:17.389 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 18:12:17 np0005537197 nova_compute[189387]: 2025-11-26 23:12:17.458 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 18:12:17 np0005537197 nova_compute[189387]: 2025-11-26 23:12:17.459 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 18:12:17 np0005537197 nova_compute[189387]: 2025-11-26 23:12:17.483 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 18:12:17 np0005537197 nova_compute[189387]: 2025-11-26 23:12:17.498 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 18:12:17 np0005537197 nova_compute[189387]: 2025-11-26 23:12:17.500 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 18:12:17 np0005537197 nova_compute[189387]: 2025-11-26 23:12:17.500 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
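A quick sanity check of the inventory reported just above (23:12:17.498): placement's usable capacity per resource class is (total - reserved) * allocation_ratio, so this node can place up to 32 vCPUs on its 8 physical cores at the configured 4.0 overcommit ratio. Values below are copied straight from the log:

    # Capacity per resource class: (total - reserved) * allocation_ratio
    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 79,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB ~71.1 (modulo float rounding)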
Nov 26 18:12:17 np0005537197 podman[215022]: 2025-11-26 23:12:17.753907763 +0000 UTC m=+0.082128019 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 26 18:12:17 np0005537197 podman[215023]: 2025-11-26 23:12:17.772004712 +0000 UTC m=+0.094283308 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, container_name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, distribution-scope=public, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 26 18:12:18 np0005537197 python3.9[215088]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 18:12:18 np0005537197 systemd[1]: Reloading.
Nov 26 18:12:18 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:12:18 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Nov 26 18:12:18 np0005537197 nova_compute[189387]: 2025-11-26 23:12:18.484 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:12:18 np0005537197 nova_compute[189387]: 2025-11-26 23:12:18.485 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:12:18 np0005537197 nova_compute[189387]: 2025-11-26 23:12:18.485 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:12:18 np0005537197 nova_compute[189387]: 2025-11-26 23:12:18.486 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 18:12:19 np0005537197 nova_compute[189387]: 2025-11-26 23:12:19.122 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:12:19 np0005537197 python3.9[215273]: ansible-ansible.builtin.service_facts Invoked
Nov 26 18:12:19 np0005537197 network[215290]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 26 18:12:19 np0005537197 network[215291]: 'network-scripts' will be removed from the distribution in the near future.
Nov 26 18:12:19 np0005537197 network[215292]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 26 18:12:19 np0005537197 podman[215298]: 2025-11-26 23:12:19.629022305 +0000 UTC m=+0.078093603 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 18:12:20 np0005537197 nova_compute[189387]: 2025-11-26 23:12:20.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:12:20 np0005537197 nova_compute[189387]: 2025-11-26 23:12:20.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:12:24 np0005537197 python3.9[215595]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:12:25 np0005537197 python3.9[215748]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:12:26 np0005537197 python3.9[215900]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:12:27 np0005537197 python3.9[216052]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
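Decoding journald's #012 escapes (octal for newline) in the _raw_params above, the shell fragment that was executed reads:

    if systemctl is-active certmonger.service; then
      systemctl disable --now certmonger.service
      test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi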
Nov 26 18:12:28 np0005537197 python3.9[216204]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 18:12:28 np0005537197 podman[216356]: 2025-11-26 23:12:28.968440449 +0000 UTC m=+0.089453960 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3)
Nov 26 18:12:29 np0005537197 python3.9[216357]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 18:12:29 np0005537197 systemd[1]: Reloading.
Nov 26 18:12:29 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Nov 26 18:12:29 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:12:29 np0005537197 podman[203621]: time="2025-11-26T23:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 18:12:29 np0005537197 podman[203621]: @ - - [26/Nov/2025:23:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 22540 "" "Go-http-client/1.1"
Nov 26 18:12:29 np0005537197 podman[203621]: @ - - [26/Nov/2025:23:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3424 "" "Go-http-client/1.1"
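These two requests hit podman's libpod REST API over its unix socket (the same /run/podman/podman.sock that podman_exporter mounts). A sketch of the first call from Python, assuming the third-party requests-unixsocket package is installed:

    import requests_unixsocket  # assumption: pip package requests-unixsocket

    session = requests_unixsocket.Session()
    # Percent-encode the socket path into the URL, then reuse the path and
    # query string exactly as logged above.
    resp = session.get(
        "http+unix://%2Frun%2Fpodman%2Fpodman.sock"
        "/v4.9.3/libpod/containers/json?all=true"
    )
    print(len(resp.json()), "containers")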
Nov 26 18:12:30 np0005537197 python3.9[216565]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:12:31 np0005537197 python3.9[216718]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:12:31 np0005537197 openstack_network_exporter[205787]: ERROR   23:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 18:12:31 np0005537197 openstack_network_exporter[205787]: ERROR   23:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 18:12:31 np0005537197 openstack_network_exporter[205787]: ERROR   23:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 18:12:31 np0005537197 openstack_network_exporter[205787]: ERROR   23:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 18:12:31 np0005537197 openstack_network_exporter[205787]: ERROR   23:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
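The "no control socket files found" errors above mean the exporter found no *.ctl files for the daemons it probes; that is likely benign here for ovn-northd, which normally runs on controller nodes rather than on a compute node. appctl-style tools locate a daemon through its control socket in the rundir, so a check like the following reproduces the lookup (the glob patterns are the conventional default paths, an assumption here):

    import glob

    for pattern in ("/var/run/ovn/ovn-northd.*.ctl",
                    "/var/run/openvswitch/ovsdb-server.*.ctl"):
        matches = glob.glob(pattern)
        print(pattern, "->", matches or "no control socket files found")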
Nov 26 18:12:32 np0005537197 python3.9[216873]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:12:33 np0005537197 python3.9[217025]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:12:33 np0005537197 podman[217120]: 2025-11-26 23:12:33.768448313 +0000 UTC m=+0.068967781 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 18:12:33 np0005537197 python3.9[217160]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764198752.474802-125-41256212259361/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:12:35 np0005537197 python3.9[217320]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Nov 26 18:12:36 np0005537197 python3.9[217471]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.836 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.837 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.837 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c5e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.838 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c5e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c5e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c5e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c5e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c5e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c5e20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c5e20>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.840 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c5e20>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.841 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c5e20>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.841 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c5e20>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.842 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c5e20>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.843 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c5e20>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c5e20>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.844 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c5e20>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.844 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c5e20>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.845 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c5e20>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.846 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c5e20>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.846 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c5e20>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c5e20>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.847 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c5e20>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c5e20>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.849 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c5e20>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c5e20>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.850 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c5e20>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c5e20>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.852 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.852 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.853 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.853 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.853 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.854 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.854 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.854 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.855 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.855 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.855 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.855 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.855 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.855 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.856 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.856 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.856 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.856 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.857 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.857 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.858 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.858 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.858 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.858 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.858 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.859 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.859 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.859 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.859 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.859 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:12:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:12:36.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
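The DEBUG run above traces one complete polling cycle: for each pollster the agent first executes the local_instances discovery method, skips the pollster when discovery returns no resources (the discovery cache here is {'local_instances': []}, i.e. no instances are running on this compute node), and finally emits a "Finished processing pollster" line per meter. A minimal Python sketch of that discover-then-skip loop follows; the names (Pollster, discover_local_instances) are illustrative stand-ins, not ceilometer's actual classes or API.

    # Illustrative sketch of the polling cycle traced in the DEBUG lines above.
    from concurrent.futures import ThreadPoolExecutor

    def discover_local_instances():
        # On this node the discovery cache was {'local_instances': []},
        # so every pollster is skipped this cycle.
        return []

    class Pollster:
        def __init__(self, name):
            self.name = name

        def run(self):
            resources = discover_local_instances()
            if not resources:
                print(f"Skip pollster {self.name}, no resources found this cycle")
                return
            # ...collect one sample per discovered resource here...

    pollsters = [Pollster("memory.usage"), Pollster("network.incoming.bytes")]
    # Mirrors the ThreadPoolExecutor the agent registers pollsters against.
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = {pool.submit(p.run): p for p in pollsters}
        for fut, p in futures.items():
            fut.result()
            print(f"Finished processing pollster [{p.name}].")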
Nov 26 18:12:37 np0005537197 python3.9[217593]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764198755.9882503-171-57990394012205/.source.conf _original_basename=ceilometer.conf follow=False checksum=e93ef84feaa07737af66c0c1da2fd4bdcae81d37 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:12:37 np0005537197 podman[217594]: 2025-11-26 23:12:37.335558528 +0000 UTC m=+0.128388818 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 18:12:37 np0005537197 python3.9[217763]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:12:38 np0005537197 python3.9[217884]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764198757.4136183-171-164333069887854/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:12:39 np0005537197 python3.9[218034]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:12:40 np0005537197 python3.9[218155]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764198759.1933744-171-167938810682573/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
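Each configuration file above is deployed as a pair of tasks: an ansible.legacy.stat that computes the destination's SHA-1 checksum, followed by an ansible.legacy.copy that rewrites the file only when the checksum differs. A rough Python equivalent of that checksum-gated copy pattern, as a hypothetical helper (this is not Ansible's implementation):

    # Sketch of the stat-then-copy idempotency pattern behind the paired
    # ansible.legacy.stat / ansible.legacy.copy tasks above.
    import hashlib
    import pathlib
    import shutil

    def sha1_of(path: pathlib.Path) -> str | None:
        # Matches checksum_algorithm=sha1 in the stat invocations.
        if not path.exists():
            return None
        return hashlib.sha1(path.read_bytes()).hexdigest()

    def copy_if_changed(src: pathlib.Path, dest: pathlib.Path, mode: int = 0o640) -> bool:
        """Copy src over dest only when their SHA-1 checksums differ."""
        if sha1_of(src) == sha1_of(dest):
            return False  # unchanged: the task reports "ok", not "changed"
        shutil.copy2(src, dest)
        dest.chmod(mode)  # e.g. mode=0640 as in the copy invocations above
        return True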
Nov 26 18:12:41 np0005537197 python3.9[218305]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:12:41 np0005537197 python3.9[218457]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:12:42 np0005537197 python3.9[218609]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:12:43 np0005537197 python3.9[218730]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764198762.0654309-230-264118574012349/.source.json follow=False _original_basename=ceilometer-agent-ipmi.json.j2 checksum=21255e7f7db3155b4a491729298d9407fe6f8335 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:12:43 np0005537197 python3.9[218880]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:12:44 np0005537197 python3.9[218956]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:12:45 np0005537197 python3.9[219106]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:12:45 np0005537197 python3.9[219227]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764198764.4572754-230-77375366210370/.source.json follow=False _original_basename=ceilometer_agent_ipmi.json.j2 checksum=cf81874b7544c057599ec397442879f74d42b3ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:12:46 np0005537197 podman[219353]: 2025-11-26 23:12:46.240887255 +0000 UTC m=+0.102404932 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 26 18:12:46 np0005537197 python3.9[219399]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:12:46 np0005537197 python3.9[219527]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764198765.8324802-230-59802855356418/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:12:47 np0005537197 python3.9[219677]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:12:48 np0005537197 podman[219773]: 2025-11-26 23:12:48.113482425 +0000 UTC m=+0.086501255 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, version=9.6, distribution-scope=public, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Nov 26 18:12:48 np0005537197 podman[219772]: 2025-11-26 23:12:48.129609848 +0000 UTC m=+0.110380321 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 26 18:12:48 np0005537197 python3.9[219828]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764198767.167164-230-34367730045466/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:12:48 np0005537197 python3.9[219986]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:12:49 np0005537197 python3.9[220107]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764198768.4617887-230-162951365043367/.source.json follow=False _original_basename=kepler.json.j2 checksum=89451093c8765edd3915016a9e87770fe489178d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:12:50 np0005537197 python3.9[220257]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:12:50 np0005537197 podman[220258]: 2025-11-26 23:12:50.509311302 +0000 UTC m=+0.064565746 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 18:12:50 np0005537197 python3.9[220357]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:12:51 np0005537197 python3.9[220509]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:12:52 np0005537197 python3.9[220661]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:12:53 np0005537197 python3.9[220813]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:12:54 np0005537197 python3.9[220965]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:12:55 np0005537197 python3.9[221088]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764198773.9061792-349-232396449720997/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:12:55 np0005537197 python3.9[221164]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:12:56 np0005537197 python3.9[221287]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764198773.9061792-349-232396449720997/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:12:57 np0005537197 python3.9[221439]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:12:58 np0005537197 python3.9[221562]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764198776.8332646-349-144412599494472/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 26 18:12:59 np0005537197 python3.9[221714]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=ceilometer_agent_ipmi.json debug=False
Nov 26 18:12:59 np0005537197 podman[203621]: time="2025-11-26T23:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 18:12:59 np0005537197 podman[203621]: @ - - [26/Nov/2025:23:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 22540 "" "Go-http-client/1.1"
Nov 26 18:12:59 np0005537197 podman[203621]: @ - - [26/Nov/2025:23:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3429 "" "Go-http-client/1.1"
Nov 26 18:12:59 np0005537197 podman[221791]: 2025-11-26 23:12:59.803644051 +0000 UTC m=+0.087599528 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 18:13:00 np0005537197 python3.9[221886]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 18:13:01 np0005537197 openstack_network_exporter[205787]: ERROR   23:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 18:13:01 np0005537197 openstack_network_exporter[205787]: ERROR   23:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 18:13:01 np0005537197 openstack_network_exporter[205787]: ERROR   23:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 18:13:01 np0005537197 openstack_network_exporter[205787]: ERROR   23:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 18:13:01 np0005537197 openstack_network_exporter[205787]: ERROR   23:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 18:13:01 np0005537197 python3[222038]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=ceilometer_agent_ipmi.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 18:13:01 np0005537197 podman[222075]: 2025-11-26 23:13:01.787520176 +0000 UTC m=+0.059840328 container create d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 18:13:01 np0005537197 podman[222075]: 2025-11-26 23:13:01.757599967 +0000 UTC m=+0.029920149 image pull 743c1960518ee2a8df257b87dd40a31faa57a99c6d0aa394baae4cd418c3c2b2 quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Nov 26 18:13:01 np0005537197 python3[222038]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck ipmi --label config_id=edpm --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
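The PODMAN-CONTAINER-DEBUG line records the full podman create invocation that edpm_container_manage assembled from the config_data dict logged just above it. A simplified sketch of deriving such an argv from a dict of that shape; key handling here is an assumption for illustration, and the real module supports many more options:

    # Hedged sketch: build a `podman create` argv from a config_data-style dict.
    def podman_create_argv(name: str, config: dict) -> list[str]:
        argv = ["podman", "create", "--name", name]
        for key, value in config.get("environment", {}).items():
            argv += ["--env", f"{key}={value}"]
        if "healthcheck" in config:
            argv += ["--healthcheck-command", config["healthcheck"]["test"]]
        if config.get("net"):
            argv += ["--network", config["net"]]
        if config.get("privileged"):
            argv += ["--privileged=True"]
        if config.get("security_opt"):
            argv += ["--security-opt", config["security_opt"]]
        if config.get("user"):
            argv += ["--user", config["user"]]
        for volume in config.get("volumes", []):
            argv += ["--volume", volume]
        argv.append(config["image"])
        if config.get("command"):
            argv.append(config["command"])
        return argv

Feeding this function the config_data dict from the debug line would reproduce the --env, --healthcheck-command, --network, --privileged, --security-opt, --user and --volume arguments seen there, in a slightly different order.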
Nov 26 18:13:02 np0005537197 python3.9[222266]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:13:03 np0005537197 python3.9[222420]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:13:04 np0005537197 podman[222543]: 2025-11-26 23:13:04.502775028 +0000 UTC m=+0.067602773 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 18:13:04 np0005537197 python3.9[222594]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764198783.935542-427-89584345327654/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:13:05 np0005537197 python3.9[222670]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 18:13:05 np0005537197 systemd[1]: Reloading.
Nov 26 18:13:05 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:13:05 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:13:06 np0005537197 python3.9[222780]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:13:06 np0005537197 systemd[1]: Reloading.
Nov 26 18:13:07 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:13:07 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:13:07 np0005537197 systemd[1]: Starting ceilometer_agent_ipmi container...
Nov 26 18:13:07 np0005537197 systemd[1]: Started libcrun container.
Nov 26 18:13:07 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18aac0b77076187fdef0cf1c2252a3e75e09bd690fdff7f0ad5faf9c79af15b8/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Nov 26 18:13:07 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18aac0b77076187fdef0cf1c2252a3e75e09bd690fdff7f0ad5faf9c79af15b8/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 18:13:07 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18aac0b77076187fdef0cf1c2252a3e75e09bd690fdff7f0ad5faf9c79af15b8/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 26 18:13:07 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18aac0b77076187fdef0cf1c2252a3e75e09bd690fdff7f0ad5faf9c79af15b8/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Nov 26 18:13:07 np0005537197 systemd[1]: Started /usr/bin/podman healthcheck run d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61.
Nov 26 18:13:07 np0005537197 podman[222821]: 2025-11-26 23:13:07.50602801 +0000 UTC m=+0.196430773 container init d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Nov 26 18:13:07 np0005537197 podman[222834]: 2025-11-26 23:13:07.507814741 +0000 UTC m=+0.109257519 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=edpm, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Nov 26 18:13:07 np0005537197 ceilometer_agent_ipmi[222837]: + sudo -E kolla_set_configs
Nov 26 18:13:07 np0005537197 podman[222821]: 2025-11-26 23:13:07.5491957 +0000 UTC m=+0.239598393 container start d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 18:13:07 np0005537197 podman[222821]: ceilometer_agent_ipmi
Nov 26 18:13:07 np0005537197 systemd[1]: Started ceilometer_agent_ipmi container.
Nov 26 18:13:07 np0005537197 ceilometer_agent_ipmi[222837]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 18:13:07 np0005537197 ceilometer_agent_ipmi[222837]: INFO:__main__:Validating config file
Nov 26 18:13:07 np0005537197 ceilometer_agent_ipmi[222837]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 18:13:07 np0005537197 ceilometer_agent_ipmi[222837]: INFO:__main__:Copying service configuration files
Nov 26 18:13:07 np0005537197 ceilometer_agent_ipmi[222837]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 26 18:13:07 np0005537197 ceilometer_agent_ipmi[222837]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 26 18:13:07 np0005537197 podman[222863]: 2025-11-26 23:13:07.636511927 +0000 UTC m=+0.072831622 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 26 18:13:07 np0005537197 ceilometer_agent_ipmi[222837]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 26 18:13:07 np0005537197 ceilometer_agent_ipmi[222837]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 26 18:13:07 np0005537197 ceilometer_agent_ipmi[222837]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 26 18:13:07 np0005537197 ceilometer_agent_ipmi[222837]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 26 18:13:07 np0005537197 ceilometer_agent_ipmi[222837]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 26 18:13:07 np0005537197 ceilometer_agent_ipmi[222837]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 26 18:13:07 np0005537197 ceilometer_agent_ipmi[222837]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 26 18:13:07 np0005537197 ceilometer_agent_ipmi[222837]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 26 18:13:07 np0005537197 ceilometer_agent_ipmi[222837]: INFO:__main__:Writing out command to execute
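
[Annotation] The kolla_set_configs run above is driven entirely by the JSON file bind-mounted to /var/lib/kolla/config_files/config.json (see the 'volumes' list in the container-init line). A reconstruction of its likely shape follows, written as a Python dict; the source/dest paths and the command string mirror the log lines above, while the owner/perm values are assumptions about typical kolla defaults, not values read from this system.

    # Reconstruction (from the copy operations logged above) of the shape of
    # the config.json mounted at /var/lib/kolla/config_files/config.json.
    # owner/perm are assumed defaults; command and paths come from the log.
    KOLLA_CONFIG = {
        "command": "/usr/bin/ceilometer-polling --polling-namespaces ipmi "
                   "--logfile /dev/stdout",
        "config_files": [
            {"source": "/var/lib/openstack/config/ceilometer.conf",
             "dest": "/etc/ceilometer/ceilometer.conf",
             "owner": "ceilometer", "perm": "0600"},
            {"source": "/var/lib/openstack/config/polling.yaml",
             "dest": "/etc/ceilometer/polling.yaml",
             "owner": "ceilometer", "perm": "0600"},
            {"source": "/var/lib/openstack/config/custom.conf",
             "dest": "/etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf",
             "owner": "ceilometer", "perm": "0600"},
            {"source": "/var/lib/openstack/config/ceilometer-host-specific.conf",
             "dest": "/etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf",
             "owner": "ceilometer", "perm": "0600"},
        ],
    }

With KOLLA_CONFIG_STRATEGY=COPY_ALWAYS (set in the container environment above), each entry is deleted at the destination and re-copied on every start, which is exactly the Deleting/Copying/Setting-permission sequence logged.
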
Nov 26 18:13:07 np0005537197 systemd[1]: d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61-3acb5e90ff1359f1.service: Main process exited, code=exited, status=1/FAILURE
Nov 26 18:13:07 np0005537197 systemd[1]: d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61-3acb5e90ff1359f1.service: Failed with result 'exit-code'.
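
[Annotation] The two FAILURE lines above appear to come from the transient systemd unit podman spawns for the container healthcheck ("Started /usr/bin/podman healthcheck run d7e7bc03..." earlier): the first probe runs while the agent is still starting, exits non-zero, and is recorded as health_status=starting with health_failing_streak=1 rather than as a container failure. A rough Python sketch of that exit-code-to-state mapping, using a hypothetical wrapper name:

    import subprocess

    def run_healthcheck(container_id, test_cmd):
        # Hypothetical wrapper: real podman does this natively via
        # `podman healthcheck run <container-id>`, exec'ing the configured
        # test ('/openstack/healthcheck ipmi' here) inside the container.
        result = subprocess.run(["podman", "exec", container_id] + test_cmd)
        return "healthy" if result.returncode == 0 else "unhealthy"

    # e.g. run_healthcheck("d7e7bc031ad2", ["/openstack/healthcheck", "ipmi"])
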
Nov 26 18:13:07 np0005537197 ceilometer_agent_ipmi[222837]: ++ cat /run_command
Nov 26 18:13:07 np0005537197 ceilometer_agent_ipmi[222837]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Nov 26 18:13:07 np0005537197 ceilometer_agent_ipmi[222837]: + ARGS=
Nov 26 18:13:07 np0005537197 ceilometer_agent_ipmi[222837]: + sudo kolla_copy_cacerts
Nov 26 18:13:07 np0005537197 ceilometer_agent_ipmi[222837]: + [[ ! -n '' ]]
Nov 26 18:13:07 np0005537197 ceilometer_agent_ipmi[222837]: + . kolla_extend_start
Nov 26 18:13:07 np0005537197 ceilometer_agent_ipmi[222837]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Nov 26 18:13:07 np0005537197 ceilometer_agent_ipmi[222837]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Nov 26 18:13:07 np0005537197 ceilometer_agent_ipmi[222837]: + umask 0022
Nov 26 18:13:07 np0005537197 ceilometer_agent_ipmi[222837]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.509 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.509 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.509 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.509 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.509 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.509 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.509 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.510 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.510 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.510 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.510 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.510 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.510 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.510 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.510 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.510 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.510 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.511 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.511 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.511 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.511 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.511 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.511 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.511 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.511 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.511 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.511 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.511 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 python3.9[223039]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=kepler.json debug=False
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.511 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.512 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.512 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.512 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.512 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.512 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.512 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.512 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.512 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.512 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.512 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.512 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.512 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.512 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.513 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.513 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.513 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.513 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.513 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.513 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.513 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.513 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.513 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.513 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.513 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.513 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.514 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.514 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.514 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.514 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.514 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.514 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.514 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.514 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.514 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.514 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.515 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.515 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.515 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.515 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.515 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.515 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.515 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.515 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.515 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.515 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.515 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.515 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.516 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.516 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.516 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.516 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.516 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.516 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.516 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.516 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.516 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.516 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.516 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.516 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.517 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.517 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.517 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.517 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.517 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.517 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.517 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.517 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.517 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.517 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.517 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.517 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.518 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.518 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.518 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.518 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.518 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.518 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.518 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.518 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.518 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.518 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.519 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.519 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.519 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.519 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.519 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.519 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.519 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.519 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.519 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.519 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.519 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.519 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.520 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.520 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.520 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.520 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.520 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.520 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.520 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.520 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.520 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.520 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.520 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.520 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.521 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.521 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.521 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.521 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.521 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.521 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.521 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.521 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.521 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.521 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.521 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.521 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.522 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.522 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.522 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.522 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.522 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.522 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.522 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.522 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.522 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.522 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.522 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.522 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.523 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.523 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.523 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.523 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.523 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.523 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.523 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
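
[Annotation] The "Full set of CONF" block that just ended is produced by oslo.config's log_opt_values(), which walks every registered option at DEBUG level and masks any option registered with secret=True; that is why coordination.backend_url, publisher.telemetry_secret, vmware.host_password, and the rgw_admin_credentials keys print as ****. A minimal, self-contained sketch of that mechanism, assuming a toy option set rather than ceilometer's real one:

    import logging
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        cfg.IntOpt("batch_size", default=50),
        cfg.StrOpt("telemetry_secret", secret=True),  # rendered as ****
    ])

    logging.basicConfig(level=logging.DEBUG)
    CONF([])  # parse an empty command line
    # Emits the same banner/option/value layout seen in the log above.
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)
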
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.542 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.543 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.544 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Nov 26 18:13:08 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:08.620 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpvmfoum5s/privsep.sock']
Nov 26 18:13:09 np0005537197 python3.9[223199]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.382 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.383 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpvmfoum5s/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.241 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.248 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.252 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.252 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
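
[Annotation] The privsep lines above show the standard oslo.privsep startup: the unprivileged agent launches a root helper through ceilometer-rootwrap ("Running privsep helper" earlier), the helper reports its uid/gid and retained capabilities, and subsequent privileged calls travel over the unix socket in /tmp. A hedged sketch of how such a context is declared; the context name matches the --privsep_context argument in the helper command line, but the entrypoint function is purely illustrative:

    from oslo_privsep import capabilities, priv_context

    sys_admin_pctxt = priv_context.PrivContext(
        "ceilometer",                         # config/log prefix
        cfg_section="privsep",
        pypath=__name__ + ".sys_admin_pctxt",
        capabilities=[capabilities.CAP_SYS_ADMIN,
                      capabilities.CAP_DAC_OVERRIDE],
    )

    @sys_admin_pctxt.entrypoint
    def read_protected(path):
        # Illustrative only: the body runs inside the root privsep daemon,
        # not in the calling agent process.
        with open(path, "rb") as f:
            return f.read()
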
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.523 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.523 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.525 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.525 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.525 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.525 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.525 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.526 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.526 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.526 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.526 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.527 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.527 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
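
[Annotation] Net effect of the Skip-loading lines: every pollster in the ipmi namespace fails to load, so the agent starts with nothing to poll. The "IPMITool not supported on host" skips are expected on a hardware-less guest with no local BMC to answer IPMI requests, while the repeated "object.__new__() takes exactly one argument" skips look like a class-instantiation problem in those pollster plugins rather than a host limitation. A rough probe of the first condition, offered as an assumption about the mechanism rather than ceilometer's exact code:

    import shutil
    import subprocess

    def ipmitool_usable(timeout=5):
        # "Get Device ID" (NetFn App 0x06, cmd 0x01) against the local BMC;
        # this fails on a VM with no BMC, matching the skips logged above.
        if shutil.which("ipmitool") is None:
            return False
        try:
            subprocess.run(["ipmitool", "raw", "0x06", "0x01"],
                           check=True, capture_output=True, timeout=timeout)
            return True
        except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
            return False
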
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.532 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.532 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.533 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.533 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.533 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.533 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.533 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.533 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.534 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.534 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.534 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.534 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.534 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.535 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.535 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.535 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.536 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.536 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.536 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.536 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.536 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.536 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.537 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.537 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.537 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.537 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.537 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.538 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.538 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.538 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.538 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.538 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.539 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.539 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.539 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.539 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.539 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.539 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.540 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.540 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.540 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.540 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.540 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.541 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.541 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.541 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.541 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.541 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.541 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.542 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.542 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.542 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.542 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.542 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.543 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.543 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.543 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.543 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.543 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.543 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.544 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.544 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.544 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.544 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.544 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.545 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.545 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.545 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.545 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.545 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.546 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.546 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.546 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.546 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.546 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.547 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.547 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.547 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.547 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.547 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.548 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.548 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.548 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.548 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.548 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.548 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.549 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.549 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.549 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.549 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.549 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.550 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.550 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.550 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.550 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.550 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.551 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.551 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.551 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.551 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.551 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.551 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.552 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.552 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.552 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.552 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.552 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.553 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.553 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.553 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.553 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.553 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.554 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.554 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.554 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.554 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.554 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.555 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.555 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.555 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.555 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.555 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.555 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.556 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.556 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.556 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.556 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.556 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.557 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.557 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.557 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.557 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.557 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.557 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.558 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.558 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.558 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.558 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.558 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.559 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.559 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.559 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.559 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.559 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.559 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.560 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.560 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.560 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.560 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.560 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.560 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.561 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.561 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.561 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.561 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.561 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.562 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.562 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.562 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.562 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.562 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.563 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.563 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.563 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.563 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.563 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.563 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.564 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.564 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.564 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.564 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.564 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.564 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.565 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.565 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.565 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.565 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.565 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.566 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.566 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.566 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.566 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.567 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.567 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.567 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.567 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.568 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.568 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.568 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.568 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.568 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.569 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.569 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.569 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.569 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.570 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.570 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.570 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.570 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.570 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.571 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.571 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
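The banner-delimited block above is oslo.config's standard option dump: log_opt_values(), the cfg.py helper named in every line, walks all registered options and logs one per line at DEBUG, substituting **** for options registered as secret, which is why transport_url, publisher.telemetry_secret and the rgw/vmware credentials are masked. A minimal sketch of producing such a dump, assuming oslo.config is installed (the options printed will be whatever the caller registered):

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.CONF
    CONF([], project='example')              # parse an (empty) command line
    CONF.log_opt_values(LOG, logging.DEBUG)  # emits the banner-delimited dump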
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.571 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Nov 26 18:13:09 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:09.575 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
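Rendered as YAML, the polling configuration just loaded is (a one-for-one transliteration of the dict logged above, nothing added):

    sources:
      - name: pollsters
        interval: 120
        meters:
          - "hardware.*"

That is, every meter matching hardware.* would be polled every 120 seconds, had any IPMI pollster loaded.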
Nov 26 18:13:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:13:09.613 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 18:13:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:13:09.615 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 18:13:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:13:09.615 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
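The acquire/waited/held triplet above is the standard trace from oslo.concurrency's named-lock decorator; "inner" in each line is the wrapper function that logs it. A sketch of the pattern as a caller would write it, assuming oslo.concurrency is installed (illustrative, not the neutron source):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        # Runs with the named lock held; the wrapper logs how long the
        # caller waited for the lock and how long it held the lock.
        pass

    _check_child_processes()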
Nov 26 18:13:10 np0005537197 python3[223356]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=kepler.json log_base_path=/var/log/containers/stdouts debug=False
Nov 26 18:13:10 np0005537197 podman[223394]: 2025-11-26 23:13:10.698544948 +0000 UTC m=+0.068103188 container create 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.buildah.version=1.29.0, managed_by=edpm_ansible, io.openshift.expose-services=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, container_name=kepler, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, architecture=x86_64, distribution-scope=public, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, vendor=Red Hat, Inc.)
Nov 26 18:13:10 np0005537197 podman[223394]: 2025-11-26 23:13:10.661574245 +0000 UTC m=+0.031132525 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Nov 26 18:13:10 np0005537197 python3[223356]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env EXPOSE_CONTAINER_METRICS=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_VM_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=edpm --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Nov 26 18:13:11 np0005537197 python3.9[223582]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:13:12 np0005537197 python3.9[223736]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:13:13 np0005537197 python3.9[223887]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764198793.0253348-489-236339022705527/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:13:14 np0005537197 python3.9[223963]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 26 18:13:14 np0005537197 systemd[1]: Reloading.
Nov 26 18:13:14 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 26 18:13:14 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:13:15 np0005537197 python3.9[224074]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 26 18:13:15 np0005537197 systemd[1]: Reloading.
Nov 26 18:13:15 np0005537197 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 26 18:13:15 np0005537197 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
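The Ansible tasks above copy the edpm_kepler.service unit into /etc/systemd/system, reload the systemd manager configuration (the reload is what re-triggers the generator warnings about the legacy SysV network script and the non-executable rc.local), then restart the unit with enabled=True. Reduced to a hedged subprocess sketch of the same sequence:

    import subprocess

    # Sketch of the systemd steps driven by the ansible-systemd tasks above:
    # reload unit definitions after the unit file copy, then enable and
    # restart the service. Error handling is minimal by design.
    def deploy_unit(unit="edpm_kepler.service"):
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "enable", unit], check=True)
        subprocess.run(["systemctl", "restart", unit], check=True)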
Nov 26 18:13:16 np0005537197 nova_compute[189387]: 2025-11-26 23:13:16.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:13:16 np0005537197 systemd[1]: Starting kepler container...
Nov 26 18:13:16 np0005537197 systemd[1]: Started libcrun container.
Nov 26 18:13:16 np0005537197 systemd[1]: Started /usr/bin/podman healthcheck run 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad.
Nov 26 18:13:16 np0005537197 podman[224113]: 2025-11-26 23:13:16.365304843 +0000 UTC m=+0.177701284 container init 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, version=9.4, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, maintainer=Red Hat, Inc., distribution-scope=public, vendor=Red Hat, Inc., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.tags=base rhel9, managed_by=edpm_ansible, config_id=edpm, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 26 18:13:16 np0005537197 kepler[224129]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Nov 26 18:13:16 np0005537197 podman[224113]: 2025-11-26 23:13:16.404283083 +0000 UTC m=+0.216679484 container start 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, name=ubi9, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler)
Nov 26 18:13:16 np0005537197 podman[224113]: kepler
Nov 26 18:13:16 np0005537197 kepler[224129]: I1126 23:13:16.413795       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Nov 26 18:13:16 np0005537197 kepler[224129]: I1126 23:13:16.413966       1 config.go:293] using gCgroup ID in the BPF program: true
Nov 26 18:13:16 np0005537197 kepler[224129]: I1126 23:13:16.414062       1 config.go:295] kernel version: 5.14
Nov 26 18:13:16 np0005537197 kepler[224129]: I1126 23:13:16.414762       1 power.go:78] Unable to obtain power, use estimate method
Nov 26 18:13:16 np0005537197 kepler[224129]: I1126 23:13:16.414788       1 redfish.go:169] failed to get redfish credential file path
Nov 26 18:13:16 np0005537197 kepler[224129]: I1126 23:13:16.415234       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Nov 26 18:13:16 np0005537197 kepler[224129]: I1126 23:13:16.415249       1 power.go:79] using none to obtain power
Nov 26 18:13:16 np0005537197 kepler[224129]: E1126 23:13:16.415265       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Nov 26 18:13:16 np0005537197 kepler[224129]: E1126 23:13:16.415287       1 exporter.go:154] failed to init GPU accelerators: no devices found
Nov 26 18:13:16 np0005537197 kepler[224129]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Nov 26 18:13:16 np0005537197 kepler[224129]: I1126 23:13:16.417563       1 exporter.go:84] Number of CPUs: 8
Nov 26 18:13:16 np0005537197 systemd[1]: Started kepler container.
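The power.go, redfish.go and acpi.go messages during startup show Kepler probing its platform power sources in order and ending up with none, which is expected inside a Nova/KVM guest. A rough host-side sketch of equivalent checks; the paths are the usual Linux locations, not taken from Kepler's source, and the redfish_cred_file argument is hypothetical:

    import os

    # Rough illustration of why Kepler fell back to the estimate method on
    # this VM: none of the usual platform power sources are present.
    def probe_power_sources(redfish_cred_file=None):
        return {
            # Intel RAPL powercap interface (absent on most VMs)
            "rapl": os.path.isdir("/sys/class/powercap/intel-rapl"),
            # ACPI power meter exposed through hwmon
            "acpi": any(
                "power" in f
                for d in os.listdir("/sys/class/hwmon")
                for f in os.listdir("/sys/class/hwmon/%s" % d)
            ) if os.path.isdir("/sys/class/hwmon") else False,
            # Redfish needs a credential file path configured at all
            "redfish": bool(redfish_cred_file
                            and os.path.isfile(redfish_cred_file)),
        }  # all False here => "using none to obtain power"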
Nov 26 18:13:16 np0005537197 podman[224132]: 2025-11-26 23:13:16.49854345 +0000 UTC m=+0.170128357 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller)
Nov 26 18:13:16 np0005537197 podman[224154]: 2025-11-26 23:13:16.523786085 +0000 UTC m=+0.098793528 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, distribution-scope=public, container_name=kepler, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, version=9.4, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=edpm, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, release-0.7.12=)
Nov 26 18:13:16 np0005537197 systemd[1]: 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad-1783c0767d5a09fd.service: Main process exited, code=exited, status=1/FAILURE
Nov 26 18:13:16 np0005537197 systemd[1]: 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad-1783c0767d5a09fd.service: Failed with result 'exit-code'.
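The failing 331ab0...-1783c0767d5a09fd.service here is the transient unit podman spawns to run the container healthcheck; its first run exits non-zero while the container is still health_status=starting with health_failing_streak=1, as logged just above. A small sketch for inspecting that state by hand:

    import json
    import subprocess

    # Query a container's health state as one would with `podman inspect`;
    # .State.Health carries the Status and FailingStreak values that also
    # appear in the health_status journal events above.
    def health(container="kepler"):
        out = subprocess.run(
            ["podman", "inspect", "--format",
             "{{json .State.Health}}", container],
            capture_output=True, text=True, check=True,
        ).stdout
        state = json.loads(out)
        return state.get("Status"), state.get("FailingStreak")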
Nov 26 18:13:17 np0005537197 kepler[224129]: I1126 23:13:17.053210       1 watcher.go:83] Using in cluster k8s config
Nov 26 18:13:17 np0005537197 kepler[224129]: I1126 23:13:17.053249       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Nov 26 18:13:17 np0005537197 kepler[224129]: E1126 23:13:17.053314       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
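The watcher messages are standard behavior for in-cluster Kubernetes configuration: it requires KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT, which the kubelet injects into pods and which are never set under plain podman on an EDPM node, so Kepler's k8s APIserver watcher stays disabled. The check amounts to:

    import os

    # The in-cluster test that fails on this node: both variables are
    # injected by the kubelet inside a pod, and absent under plain podman.
    def in_cluster():
        return bool(os.environ.get("KUBERNETES_SERVICE_HOST")
                    and os.environ.get("KUBERNETES_SERVICE_PORT"))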
Nov 26 18:13:17 np0005537197 kepler[224129]: I1126 23:13:17.058480       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Nov 26 18:13:17 np0005537197 kepler[224129]: I1126 23:13:17.058517       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Nov 26 18:13:17 np0005537197 kepler[224129]: I1126 23:13:17.065989       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Nov 26 18:13:17 np0005537197 kepler[224129]: I1126 23:13:17.066020       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Nov 26 18:13:17 np0005537197 kepler[224129]: I1126 23:13:17.077873       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 26 18:13:17 np0005537197 kepler[224129]: I1126 23:13:17.077920       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Nov 26 18:13:17 np0005537197 kepler[224129]: I1126 23:13:17.077935       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Nov 26 18:13:17 np0005537197 kepler[224129]: I1126 23:13:17.090974       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 26 18:13:17 np0005537197 kepler[224129]: I1126 23:13:17.091009       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 26 18:13:17 np0005537197 kepler[224129]: I1126 23:13:17.091016       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 26 18:13:17 np0005537197 kepler[224129]: I1126 23:13:17.091021       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 26 18:13:17 np0005537197 kepler[224129]: I1126 23:13:17.091028       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Nov 26 18:13:17 np0005537197 kepler[224129]: I1126 23:13:17.091040       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Nov 26 18:13:17 np0005537197 kepler[224129]: I1126 23:13:17.091153       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Nov 26 18:13:17 np0005537197 kepler[224129]: I1126 23:13:17.091180       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Nov 26 18:13:17 np0005537197 kepler[224129]: I1126 23:13:17.091200       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Nov 26 18:13:17 np0005537197 kepler[224129]: I1126 23:13:17.091218       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Nov 26 18:13:17 np0005537197 kepler[224129]: I1126 23:13:17.091462       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Nov 26 18:13:17 np0005537197 kepler[224129]: I1126 23:13:17.092540       1 exporter.go:208] Started Kepler in 679.01549ms
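With the exporter listening on 0.0.0.0:8888, the same port published in the container config, the Process, Container, VM and Node metric families registered above can be scraped directly over the host network. A quick verification sketch:

    from urllib.request import urlopen

    # Pull the Kepler exporter's Prometheus endpoint on the host network
    # (net=host in the container config) and list the kepler_* families.
    def kepler_metric_names(url="http://127.0.0.1:8888/metrics"):
        with urlopen(url, timeout=5) as resp:
            text = resp.read().decode()
        return sorted({
            line.split()[2]          # "# HELP <name> <help...>" -> <name>
            for line in text.splitlines()
            if line.startswith("# HELP kepler_")
        })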
Nov 26 18:13:17 np0005537197 nova_compute[189387]: 2025-11-26 23:13:17.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:13:17 np0005537197 nova_compute[189387]: 2025-11-26 23:13:17.167 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 18:13:17 np0005537197 nova_compute[189387]: 2025-11-26 23:13:17.168 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 18:13:17 np0005537197 nova_compute[189387]: 2025-11-26 23:13:17.168 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 18:13:17 np0005537197 nova_compute[189387]: 2025-11-26 23:13:17.169 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 18:13:17 np0005537197 python3.9[224350]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 18:13:17 np0005537197 systemd[1]: Stopping ceilometer_agent_ipmi container...
Nov 26 18:13:17 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:17.603 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Nov 26 18:13:17 np0005537197 nova_compute[189387]: 2025-11-26 23:13:17.658 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 18:13:17 np0005537197 nova_compute[189387]: 2025-11-26 23:13:17.659 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5758MB free_disk=72.44086456298828GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 18:13:17 np0005537197 nova_compute[189387]: 2025-11-26 23:13:17.660 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 18:13:17 np0005537197 nova_compute[189387]: 2025-11-26 23:13:17.660 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 18:13:17 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:17.706 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Nov 26 18:13:17 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:17.706 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Nov 26 18:13:17 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:17.706 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Nov 26 18:13:17 np0005537197 ceilometer_agent_ipmi[222837]: 2025-11-26 23:13:17.720 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
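This SIGTERM cascade, where the master process catches the signal, forwards it to the AgentManager child, waits, then logs "Shutdown finish", is cotyledon's normal graceful-exit path for the old ceilometer_agent_ipmi container being stopped. A single-process sketch of the pattern, not cotyledon's actual multi-process implementation:

    import signal
    import sys

    # Minimal stand-in for the master-process behavior in the log: catch
    # SIGTERM, let children finish, then exit cleanly. Cotyledon does this
    # across real forked service processes; this sketch is single-process.
    def _graceful_exit(signum, frame):
        print("Caught SIGTERM signal, graceful exiting of master process")
        # ... forward SIGTERM to child services and wait on them here ...
        sys.exit(0)

    signal.signal(signal.SIGTERM, _graceful_exit)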
Nov 26 18:13:17 np0005537197 nova_compute[189387]: 2025-11-26 23:13:17.727 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 18:13:17 np0005537197 nova_compute[189387]: 2025-11-26 23:13:17.727 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 18:13:17 np0005537197 nova_compute[189387]: 2025-11-26 23:13:17.765 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 18:13:17 np0005537197 nova_compute[189387]: 2025-11-26 23:13:17.782 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 18:13:17 np0005537197 nova_compute[189387]: 2025-11-26 23:13:17.786 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 18:13:17 np0005537197 nova_compute[189387]: 2025-11-26 23:13:17.787 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
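The inventory payload in the scheduler report client line above is how the final resource view reaches placement; the schedulable capacity per resource class works out to (total - reserved) * allocation_ratio. A worked check against the logged numbers:

    # Effective capacity placement derives from the inventory logged above:
    # usable = (total - reserved) * allocation_ratio per resource class.
    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 79,   "reserved": 0,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)   # MEMORY_MB 7168.0, VCPU 32.0, DISK_GB 71.1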
Nov 26 18:13:17 np0005537197 systemd[1]: libpod-d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61.scope: Deactivated successfully.
Nov 26 18:13:17 np0005537197 systemd[1]: libpod-d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61.scope: Consumed 2.268s CPU time.
Nov 26 18:13:17 np0005537197 podman[224354]: 2025-11-26 23:13:17.876517824 +0000 UTC m=+0.348305993 container died d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Nov 26 18:13:17 np0005537197 systemd[1]: d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61-3acb5e90ff1359f1.timer: Deactivated successfully.
Nov 26 18:13:17 np0005537197 systemd[1]: Stopped /usr/bin/podman healthcheck run d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61.
Nov 26 18:13:17 np0005537197 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61-userdata-shm.mount: Deactivated successfully.
Nov 26 18:13:17 np0005537197 systemd[1]: var-lib-containers-storage-overlay-18aac0b77076187fdef0cf1c2252a3e75e09bd690fdff7f0ad5faf9c79af15b8-merged.mount: Deactivated successfully.
Nov 26 18:13:17 np0005537197 podman[224354]: 2025-11-26 23:13:17.960652691 +0000 UTC m=+0.432440860 container cleanup d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm)
Nov 26 18:13:17 np0005537197 podman[224354]: ceilometer_agent_ipmi
Nov 26 18:13:18 np0005537197 podman[224381]: ceilometer_agent_ipmi
Nov 26 18:13:18 np0005537197 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Nov 26 18:13:18 np0005537197 systemd[1]: Stopped ceilometer_agent_ipmi container.
Nov 26 18:13:18 np0005537197 systemd[1]: Starting ceilometer_agent_ipmi container...
Nov 26 18:13:18 np0005537197 systemd[1]: Started libcrun container.
Nov 26 18:13:18 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18aac0b77076187fdef0cf1c2252a3e75e09bd690fdff7f0ad5faf9c79af15b8/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Nov 26 18:13:18 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18aac0b77076187fdef0cf1c2252a3e75e09bd690fdff7f0ad5faf9c79af15b8/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 26 18:13:18 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18aac0b77076187fdef0cf1c2252a3e75e09bd690fdff7f0ad5faf9c79af15b8/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 26 18:13:18 np0005537197 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18aac0b77076187fdef0cf1c2252a3e75e09bd690fdff7f0ad5faf9c79af15b8/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Nov 26 18:13:18 np0005537197 systemd[1]: Started /usr/bin/podman healthcheck run d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61.
Nov 26 18:13:18 np0005537197 podman[224403]: 2025-11-26 23:13:18.397035753 +0000 UTC m=+0.160253542 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 26 18:13:18 np0005537197 podman[224391]: 2025-11-26 23:13:18.401724369 +0000 UTC m=+0.275706799 container init d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS)
Nov 26 18:13:18 np0005537197 podman[224404]: 2025-11-26 23:13:18.404476617 +0000 UTC m=+0.149276748 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, version=9.6, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, distribution-scope=public, name=ubi9-minimal, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter)
Nov 26 18:13:18 np0005537197 ceilometer_agent_ipmi[224413]: + sudo -E kolla_set_configs
Nov 26 18:13:18 np0005537197 podman[224391]: 2025-11-26 23:13:18.445499096 +0000 UTC m=+0.319481526 container start d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Nov 26 18:13:18 np0005537197 podman[224391]: ceilometer_agent_ipmi
Nov 26 18:13:18 np0005537197 systemd[1]: Started ceilometer_agent_ipmi container.
Nov 26 18:13:18 np0005537197 ceilometer_agent_ipmi[224413]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 26 18:13:18 np0005537197 ceilometer_agent_ipmi[224413]: INFO:__main__:Validating config file
Nov 26 18:13:18 np0005537197 ceilometer_agent_ipmi[224413]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 26 18:13:18 np0005537197 ceilometer_agent_ipmi[224413]: INFO:__main__:Copying service configuration files
Nov 26 18:13:18 np0005537197 ceilometer_agent_ipmi[224413]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 26 18:13:18 np0005537197 ceilometer_agent_ipmi[224413]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 26 18:13:18 np0005537197 ceilometer_agent_ipmi[224413]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 26 18:13:18 np0005537197 ceilometer_agent_ipmi[224413]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 26 18:13:18 np0005537197 ceilometer_agent_ipmi[224413]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 26 18:13:18 np0005537197 ceilometer_agent_ipmi[224413]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 26 18:13:18 np0005537197 ceilometer_agent_ipmi[224413]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 26 18:13:18 np0005537197 ceilometer_agent_ipmi[224413]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 26 18:13:18 np0005537197 ceilometer_agent_ipmi[224413]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 26 18:13:18 np0005537197 ceilometer_agent_ipmi[224413]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 26 18:13:18 np0005537197 ceilometer_agent_ipmi[224413]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 26 18:13:18 np0005537197 ceilometer_agent_ipmi[224413]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 26 18:13:18 np0005537197 ceilometer_agent_ipmi[224413]: INFO:__main__:Writing out command to execute
Nov 26 18:13:18 np0005537197 ceilometer_agent_ipmi[224413]: ++ cat /run_command
Nov 26 18:13:18 np0005537197 ceilometer_agent_ipmi[224413]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Nov 26 18:13:18 np0005537197 ceilometer_agent_ipmi[224413]: + ARGS=
Nov 26 18:13:18 np0005537197 ceilometer_agent_ipmi[224413]: + sudo kolla_copy_cacerts
Nov 26 18:13:18 np0005537197 ceilometer_agent_ipmi[224413]: + [[ ! -n '' ]]
Nov 26 18:13:18 np0005537197 ceilometer_agent_ipmi[224413]: + . kolla_extend_start
Nov 26 18:13:18 np0005537197 ceilometer_agent_ipmi[224413]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Nov 26 18:13:18 np0005537197 ceilometer_agent_ipmi[224413]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Nov 26 18:13:18 np0005537197 ceilometer_agent_ipmi[224413]: + umask 0022
Nov 26 18:13:18 np0005537197 ceilometer_agent_ipmi[224413]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
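The kolla_set_configs trace above follows the mounted /var/lib/kolla/config_files/config.json: for each listed file it deletes the destination, copies the source and resets permissions, then writes the command to /run_command for kolla_start to exec. A simplified sketch of that loop, assuming the usual kolla config.json shape; the real tool also handles owners, globs and optional files:

    import json
    import os
    import shutil

    # Simplified version of the COPY_ALWAYS loop logged above: each entry
    # in config.json names a source, destination and permissions; the
    # command to exec is written out separately (cf. "cat /run_command").
    def set_configs(config_json="/var/lib/kolla/config_files/config.json"):
        with open(config_json) as f:
            cfg = json.load(f)
        for item in cfg.get("config_files", []):
            dest = item["dest"]
            if os.path.exists(dest):
                os.remove(dest)                    # "Deleting <dest>"
            shutil.copy(item["source"], dest)      # "Copying <src> to <dest>"
            os.chmod(dest, int(item.get("perm", "0600"), 8))  # "Setting permission"
        with open("/run_command", "w") as f:
            f.write(cfg["command"])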
Nov 26 18:13:18 np0005537197 podman[224450]: 2025-11-26 23:13:18.612306896 +0000 UTC m=+0.141348600 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 18:13:18 np0005537197 systemd[1]: d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61-4e472a0b76970b95.service: Main process exited, code=exited, status=1/FAILURE
Nov 26 18:13:18 np0005537197 systemd[1]: d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61-4e472a0b76970b95.service: Failed with result 'exit-code'.
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.398 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.398 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.398 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.398 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.398 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.399 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.399 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.399 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.399 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.399 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.399 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.399 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.399 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.399 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.399 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.400 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.400 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.400 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.400 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.400 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.400 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.400 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.400 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.400 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.401 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.401 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.401 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.401 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.401 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.401 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.401 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.401 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.401 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.401 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.401 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.402 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.402 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.402 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.402 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.402 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.402 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.402 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.402 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.402 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.402 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.402 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.403 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.403 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.403 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.403 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.403 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.403 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.403 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.403 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.403 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.404 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.404 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.404 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.404 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.404 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.404 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.405 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.405 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.406 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.406 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.406 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.406 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.406 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.407 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.407 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.407 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.407 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.407 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.408 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.408 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.408 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.408 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.408 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.408 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.408 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.408 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.408 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.408 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.408 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.409 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.409 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.409 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.409 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.409 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.409 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.409 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.409 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.409 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.409 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.409 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.410 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.410 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.410 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.410 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.410 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.410 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.410 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.410 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.410 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.410 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.411 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.411 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.411 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.411 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.411 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.411 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.411 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.411 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.411 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.411 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.411 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.412 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.412 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.412 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.412 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.412 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.412 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.412 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.412 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.412 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.412 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.412 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.413 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.413 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.413 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.413 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.413 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.413 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.413 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.413 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.413 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.413 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.413 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.414 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.414 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.414 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.414 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.414 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.414 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.414 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.414 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.414 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.414 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.415 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.415 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.415 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.415 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.415 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.415 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.415 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.415 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.415 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.415 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.415 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.415 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.416 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.416 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.416 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.416 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.416 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
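[annotation] The block above is oslo.config's startup dump: cotyledon's oslo_config_glue calls log_opt_values() on the parsed configuration, emitting one DEBUG line per option and masking anything registered as secret (coordination.backend_url, publisher.telemetry_secret, the rgw keys, vmware.host_password) as ****. A minimal sketch of producing such a dump follows; the option names are illustrative, not ceilometer's actual registration code:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        cfg.StrOpt('pipeline_cfg_file', default='pipeline.yaml'),
        cfg.ListOpt('polling_namespaces', default=['ipmi']),
        # secret=True is what renders a value as "****" in the dump
        cfg.StrOpt('telemetry_secret', secret=True, default='s3cr3t'),
    ])

    CONF([])  # parse an (empty) command line plus any config files
    CONF.log_opt_values(LOG, logging.DEBUG)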
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.438 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.440 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.442 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Nov 26 18:13:19 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:19.470 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp1nlj1r3w/privsep.sock']
Nov 26 18:13:19 np0005537197 python3.9[224625]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 18:13:19 np0005537197 nova_compute[189387]: 2025-11-26 23:13:19.787 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:13:19 np0005537197 nova_compute[189387]: 2025-11-26 23:13:19.788 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 18:13:19 np0005537197 nova_compute[189387]: 2025-11-26 23:13:19.788 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 18:13:19 np0005537197 systemd[1]: Stopping kepler container...
Nov 26 18:13:19 np0005537197 nova_compute[189387]: 2025-11-26 23:13:19.806 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 18:13:19 np0005537197 nova_compute[189387]: 2025-11-26 23:13:19.807 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:13:19 np0005537197 kepler[224129]: I1126 23:13:19.880379       1 exporter.go:218] Received shutdown signal
Nov 26 18:13:19 np0005537197 kepler[224129]: I1126 23:13:19.880545       1 exporter.go:226] Exiting...
Nov 26 18:13:20 np0005537197 systemd[1]: libpod-331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad.scope: Deactivated successfully.
Nov 26 18:13:20 np0005537197 podman[224637]: 2025-11-26 23:13:20.07916018 +0000 UTC m=+0.266370573 container died 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, managed_by=edpm_ansible, name=ubi9, maintainer=Red Hat, Inc., release=1214.1726694543, architecture=x86_64, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, config_id=edpm, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 26 18:13:20 np0005537197 systemd[1]: 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad-1783c0767d5a09fd.timer: Deactivated successfully.
Nov 26 18:13:20 np0005537197 systemd[1]: Stopped /usr/bin/podman healthcheck run 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad.
Nov 26 18:13:20 np0005537197 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad-userdata-shm.mount: Deactivated successfully.
Nov 26 18:13:20 np0005537197 systemd[1]: var-lib-containers-storage-overlay-f32103f762c38998fa40dca9a3945bc2f6c8e0562769cbc36fdad72c8e26d6f3-merged.mount: Deactivated successfully.
Nov 26 18:13:20 np0005537197 nova_compute[189387]: 2025-11-26 23:13:20.127 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:13:20 np0005537197 nova_compute[189387]: 2025-11-26 23:13:20.128 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:13:20 np0005537197 podman[224637]: 2025-11-26 23:13:20.134984854 +0000 UTC m=+0.322195207 container cleanup 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, name=ubi9, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, architecture=x86_64, distribution-scope=public, io.openshift.tags=base rhel9, managed_by=edpm_ansible)
Nov 26 18:13:20 np0005537197 podman[224637]: kepler
Nov 26 18:13:20 np0005537197 nova_compute[189387]: 2025-11-26 23:13:20.149 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:13:20 np0005537197 nova_compute[189387]: 2025-11-26 23:13:20.149 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:13:20 np0005537197 nova_compute[189387]: 2025-11-26 23:13:20.149 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 18:13:20 np0005537197 nova_compute[189387]: 2025-11-26 23:13:20.150 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.174 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.175 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp1nlj1r3w/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.055 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.062 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.066 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.067 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
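[annotation] The privsep lines above trace the standard oslo.privsep startup: the unprivileged agent launches a root helper through ceilometer-rootwrap, the helper forks the privsep daemon, and the daemon reports the capability set it retained (CAP_SYS_ADMIN et al.) before serving requests over the Unix socket in /tmp. A minimal sketch of how a service defines such a context; the module and function names here are illustrative, while the real context referenced in the log is ceilometer.privsep.sys_admin_pctxt:

    from oslo_privsep import capabilities
    from oslo_privsep import priv_context

    # The context names the config section and the capabilities the
    # daemon keeps after dropping the rest of root's privileges.
    sys_admin_pctxt = priv_context.PrivContext(
        'mypkg',
        cfg_section='privsep',
        pypath=__name__ + '.sys_admin_pctxt',
        capabilities=[capabilities.CAP_SYS_ADMIN,
                      capabilities.CAP_DAC_OVERRIDE],
    )

    @sys_admin_pctxt.entrypoint
    def read_protected_file(path):
        # Executes inside the forked privsep daemon (uid/gid 0/0 in
        # the log above), not in the calling agent process.
        with open(path) as f:
            return f.read()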
Nov 26 18:13:20 np0005537197 podman[224665]: kepler
Nov 26 18:13:20 np0005537197 systemd[1]: edpm_kepler.service: Deactivated successfully.
Nov 26 18:13:20 np0005537197 systemd[1]: Stopped kepler container.
Nov 26 18:13:20 np0005537197 systemd[1]: Starting kepler container...
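[annotation] The config_data blob podman logged when the container died records how edpm_ansible created the kepler container: privileged, host networking, port 8888, kepler's -v=2 verbosity flag, and bind mounts for /lib/modules, /run/libvirt, /sys, /proc, and the healthcheck directory. A rough, illustrative reconstruction of the equivalent podman invocation from that data (a sketch, not edpm_ansible's actual code path):

    import subprocess

    IMAGE = 'quay.io/sustainable_computing_io/kepler:release-0.7.12'
    ENV = {
        'ENABLE_GPU': 'true',
        'EXPOSE_CONTAINER_METRICS': 'true',
        'ENABLE_PROCESS_METRICS': 'true',
        'EXPOSE_VM_METRICS': 'true',
        'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false',
        'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1',
    }
    VOLUMES = [
        '/lib/modules:/lib/modules:ro',
        '/run/libvirt:/run/libvirt:shared,ro',
        '/sys:/sys',
        '/proc:/proc',
        '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z',
    ]

    cmd = ['podman', 'run', '--name', 'kepler', '--privileged',
           '--net', 'host', '--restart', 'always', '-p', '8888:8888']
    for key, val in ENV.items():
        cmd += ['-e', f'{key}={val}']
    for vol in VOLUMES:
        cmd += ['-v', vol]
    cmd += [IMAGE, '-v=2']  # '-v=2' is kepler's own verbosity flag, per config_data

    subprocess.run(cmd, check=True)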
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.289 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.290 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.291 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.291 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.291 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.291 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.291 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.291 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.292 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.292 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.292 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.292 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.292 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
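[annotation] Every IPMI pollster above fails to load, which is expected on this node: it is a KVM guest (per the hypervisor detection at boot) with no BMC or ipmitool, so the polling manager ends with the "No valid pollsters" warning. Ceilometer discovers pollsters through stevedore entry points and routes load failures through _catch_extension_load_error; below is a minimal sketch of that pattern, where the entry-point group name 'ceilometer.poll.ipmi' is an assumption inferred from the ['ipmi'] polling namespace in the log:

    from stevedore import extension

    def _catch_load_error(mgr, entrypoint, exc):
        # Logged above as "Skip loading extension for <name>: <error>"
        print(f'Skip loading extension for {entrypoint.name}: {exc}')

    mgr = extension.ExtensionManager(
        namespace='ceilometer.poll.ipmi',  # assumed group name
        invoke_on_load=True,
        on_load_failure_callback=_catch_load_error,
    )
    if not mgr.names():
        print("No valid pollsters can be loaded from ['ipmi'] namespaces")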
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.295 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.295 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.296 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.296 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.296 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.296 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.296 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.296 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.296 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.297 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.297 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.297 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.297 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.297 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.297 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.298 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.298 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.298 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.298 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.298 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.298 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.298 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.299 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.299 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.299 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.299 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.299 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.299 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.299 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.299 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.300 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.300 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.300 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.300 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.300 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.300 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.300 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.300 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.301 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.301 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.301 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.301 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.301 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.301 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.301 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.301 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.302 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.302 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.302 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.302 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.302 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.302 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.302 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.303 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.303 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.303 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.303 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.303 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.303 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.303 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.303 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.304 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.304 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.304 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.304 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.304 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.304 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.304 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.305 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.305 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.305 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.305 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.305 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.305 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.305 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.305 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.306 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.306 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.306 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.306 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.306 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.306 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.306 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.306 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.307 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.307 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.307 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.307 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.307 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.307 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.307 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.307 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.308 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.308 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.308 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.308 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.308 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.308 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.308 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.309 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.309 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.309 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.309 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.309 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.309 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.309 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.310 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.310 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.310 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.310 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.310 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.310 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.310 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.310 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.311 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.311 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.311 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.311 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.311 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.311 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.311 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.312 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.312 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.312 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.312 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.312 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.312 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.312 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.312 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.313 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.313 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.313 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.313 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.313 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.313 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.313 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.314 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.314 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.314 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.314 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.314 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.314 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.314 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.314 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.315 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.315 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.315 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.315 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.315 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.315 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.315 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.315 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.316 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.316 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.316 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.316 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.316 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.316 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.316 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.316 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.316 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.316 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.316 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.316 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.316 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.317 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.317 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.317 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.317 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.317 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.317 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.317 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.317 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.317 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.317 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.317 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.317 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.318 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.318 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.318 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.318 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.318 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.318 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.318 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.318 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.318 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.318 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.318 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.318 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.319 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.319 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.319 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.319 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.319 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.319 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.319 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.319 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.319 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.319 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.319 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.319 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.320 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.320 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Nov 26 18:13:20 np0005537197 ceilometer_agent_ipmi[224413]: 2025-11-26 23:13:20.323 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Nov 26 18:13:20 np0005537197 systemd[1]: Started libcrun container.
Nov 26 18:13:20 np0005537197 systemd[1]: Started /usr/bin/podman healthcheck run 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad.
Nov 26 18:13:20 np0005537197 podman[224680]: 2025-11-26 23:13:20.400937745 +0000 UTC m=+0.155445890 container init 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, distribution-scope=public, io.openshift.tags=base rhel9, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=kepler, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, managed_by=edpm_ansible, name=ubi9, version=9.4, release=1214.1726694543, release-0.7.12=)
Nov 26 18:13:20 np0005537197 kepler[224697]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Nov 26 18:13:20 np0005537197 podman[224680]: 2025-11-26 23:13:20.438756594 +0000 UTC m=+0.193264729 container start 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.expose-services=, config_id=edpm, vcs-type=git, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.buildah.version=1.29.0, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.4, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9)
Nov 26 18:13:20 np0005537197 podman[224680]: kepler
Nov 26 18:13:20 np0005537197 kepler[224697]: I1126 23:13:20.447070       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Nov 26 18:13:20 np0005537197 kepler[224697]: I1126 23:13:20.447366       1 config.go:293] using gCgroup ID in the BPF program: true
Nov 26 18:13:20 np0005537197 kepler[224697]: I1126 23:13:20.447402       1 config.go:295] kernel version: 5.14
Nov 26 18:13:20 np0005537197 kepler[224697]: I1126 23:13:20.448401       1 power.go:78] Unable to obtain power, use estimate method
Nov 26 18:13:20 np0005537197 kepler[224697]: I1126 23:13:20.448446       1 redfish.go:169] failed to get redfish credential file path
Nov 26 18:13:20 np0005537197 kepler[224697]: I1126 23:13:20.449326       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Nov 26 18:13:20 np0005537197 kepler[224697]: I1126 23:13:20.449352       1 power.go:79] using none to obtain power
Nov 26 18:13:20 np0005537197 kepler[224697]: E1126 23:13:20.449383       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Nov 26 18:13:20 np0005537197 kepler[224697]: E1126 23:13:20.449436       1 exporter.go:154] failed to init GPU accelerators: no devices found
Nov 26 18:13:20 np0005537197 systemd[1]: Started kepler container.
Nov 26 18:13:20 np0005537197 kepler[224697]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Nov 26 18:13:20 np0005537197 kepler[224697]: I1126 23:13:20.454013       1 exporter.go:84] Number of CPUs: 8
Nov 26 18:13:20 np0005537197 podman[224708]: 2025-11-26 23:13:20.550946941 +0000 UTC m=+0.088248195 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-type=git, build-date=2024-09-18T21:23:30, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, distribution-scope=public, name=ubi9, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 26 18:13:20 np0005537197 systemd[1]: 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad-51a1bbbd9a908d22.service: Main process exited, code=exited, status=1/FAILURE
Nov 26 18:13:20 np0005537197 systemd[1]: 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad-51a1bbbd9a908d22.service: Failed with result 'exit-code'.
Nov 26 18:13:20 np0005537197 podman[224749]: 2025-11-26 23:13:20.643012604 +0000 UTC m=+0.062044161 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 18:13:21 np0005537197 kepler[224697]: I1126 23:13:21.080617       1 watcher.go:83] Using in cluster k8s config
Nov 26 18:13:21 np0005537197 kepler[224697]: I1126 23:13:21.080666       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Nov 26 18:13:21 np0005537197 kepler[224697]: E1126 23:13:21.080733       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Nov 26 18:13:21 np0005537197 kepler[224697]: I1126 23:13:21.087915       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Nov 26 18:13:21 np0005537197 kepler[224697]: I1126 23:13:21.088477       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Nov 26 18:13:21 np0005537197 kepler[224697]: I1126 23:13:21.096548       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Nov 26 18:13:21 np0005537197 kepler[224697]: I1126 23:13:21.096592       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Nov 26 18:13:21 np0005537197 kepler[224697]: I1126 23:13:21.111411       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 26 18:13:21 np0005537197 kepler[224697]: I1126 23:13:21.111476       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Nov 26 18:13:21 np0005537197 kepler[224697]: I1126 23:13:21.111502       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Nov 26 18:13:21 np0005537197 kepler[224697]: I1126 23:13:21.121263       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 26 18:13:21 np0005537197 kepler[224697]: I1126 23:13:21.121314       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 26 18:13:21 np0005537197 kepler[224697]: I1126 23:13:21.121324       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 26 18:13:21 np0005537197 kepler[224697]: I1126 23:13:21.121332       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 26 18:13:21 np0005537197 kepler[224697]: I1126 23:13:21.121343       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Nov 26 18:13:21 np0005537197 kepler[224697]: I1126 23:13:21.121360       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Nov 26 18:13:21 np0005537197 kepler[224697]: I1126 23:13:21.121691       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Nov 26 18:13:21 np0005537197 kepler[224697]: I1126 23:13:21.122337       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Nov 26 18:13:21 np0005537197 kepler[224697]: I1126 23:13:21.122437       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Nov 26 18:13:21 np0005537197 kepler[224697]: I1126 23:13:21.122468       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Nov 26 18:13:21 np0005537197 kepler[224697]: I1126 23:13:21.123395       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Nov 26 18:13:21 np0005537197 kepler[224697]: I1126 23:13:21.124492       1 exporter.go:208] Started Kepler in 677.9203ms
Nov 26 18:13:21 np0005537197 nova_compute[189387]: 2025-11-26 23:13:21.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 18:13:21 np0005537197 python3.9[224913]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 26 18:13:22 np0005537197 python3.9[225069]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Nov 26 18:13:23 np0005537197 python3.9[225234]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:13:24 np0005537197 systemd[1]: Started libpod-conmon-3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0.scope.
Nov 26 18:13:24 np0005537197 podman[225235]: 2025-11-26 23:13:24.126592118 +0000 UTC m=+0.130901531 container exec 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 18:13:24 np0005537197 podman[225235]: 2025-11-26 23:13:24.13532504 +0000 UTC m=+0.139634403 container exec_died 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 18:13:24 np0005537197 systemd[1]: libpod-conmon-3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0.scope: Deactivated successfully.
Nov 26 18:13:25 np0005537197 python3.9[225420]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:13:25 np0005537197 systemd[1]: Started libpod-conmon-3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0.scope.
Nov 26 18:13:25 np0005537197 podman[225421]: 2025-11-26 23:13:25.182404907 +0000 UTC m=+0.120573968 container exec 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 26 18:13:25 np0005537197 podman[225421]: 2025-11-26 23:13:25.214378743 +0000 UTC m=+0.152547774 container exec_died 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 26 18:13:25 np0005537197 systemd[1]: libpod-conmon-3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0.scope: Deactivated successfully.
Nov 26 18:13:26 np0005537197 python3.9[225601]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:13:27 np0005537197 python3.9[225753]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Nov 26 18:13:28 np0005537197 python3.9[225918]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:13:28 np0005537197 systemd[1]: Started libpod-conmon-b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b.scope.
Nov 26 18:13:28 np0005537197 podman[225919]: 2025-11-26 23:13:28.277989685 +0000 UTC m=+0.106516776 container exec b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 26 18:13:28 np0005537197 podman[225919]: 2025-11-26 23:13:28.312033276 +0000 UTC m=+0.140560317 container exec_died b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 26 18:13:28 np0005537197 systemd[1]: libpod-conmon-b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b.scope: Deactivated successfully.
Nov 26 18:13:29 np0005537197 python3.9[226101]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:13:29 np0005537197 systemd[1]: Started libpod-conmon-b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b.scope.
Nov 26 18:13:29 np0005537197 podman[226102]: 2025-11-26 23:13:29.672387195 +0000 UTC m=+0.145498357 container exec b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 26 18:13:29 np0005537197 podman[226102]: 2025-11-26 23:13:29.706412344 +0000 UTC m=+0.179523516 container exec_died b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 18:13:29 np0005537197 podman[203621]: time="2025-11-26T23:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 18:13:29 np0005537197 podman[203621]: @ - - [26/Nov/2025:23:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 26 18:13:29 np0005537197 systemd[1]: libpod-conmon-b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b.scope: Deactivated successfully.
Nov 26 18:13:29 np0005537197 podman[203621]: @ - - [26/Nov/2025:23:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4264 "" "Go-http-client/1.1"
Nov 26 18:13:30 np0005537197 podman[226256]: 2025-11-26 23:13:30.69437693 +0000 UTC m=+0.138945474 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true)
Nov 26 18:13:30 np0005537197 python3.9[226303]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:13:31 np0005537197 openstack_network_exporter[205787]: ERROR   23:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 18:13:31 np0005537197 openstack_network_exporter[205787]: ERROR   23:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 18:13:31 np0005537197 openstack_network_exporter[205787]: ERROR   23:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 18:13:31 np0005537197 openstack_network_exporter[205787]: ERROR   23:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 18:13:31 np0005537197 openstack_network_exporter[205787]: ERROR   23:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 18:13:32 np0005537197 python3.9[226455]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Nov 26 18:13:33 np0005537197 python3.9[226619]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:13:33 np0005537197 systemd[1]: Started libpod-conmon-2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531.scope.
Nov 26 18:13:33 np0005537197 podman[226620]: 2025-11-26 23:13:33.338988647 +0000 UTC m=+0.142005785 container exec 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 18:13:33 np0005537197 podman[226620]: 2025-11-26 23:13:33.373330275 +0000 UTC m=+0.176347363 container exec_died 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 26 18:13:33 np0005537197 systemd[1]: libpod-conmon-2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531.scope: Deactivated successfully.
Nov 26 18:13:34 np0005537197 python3.9[226798]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:13:34 np0005537197 podman[226799]: 2025-11-26 23:13:34.847750469 +0000 UTC m=+0.127742807 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 18:13:34 np0005537197 systemd[1]: Started libpod-conmon-2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531.scope.
Nov 26 18:13:34 np0005537197 podman[226805]: 2025-11-26 23:13:34.909506663 +0000 UTC m=+0.146050263 container exec 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 26 18:13:34 np0005537197 podman[226805]: 2025-11-26 23:13:34.943970253 +0000 UTC m=+0.180513813 container exec_died 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS)
Nov 26 18:13:34 np0005537197 systemd[1]: libpod-conmon-2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531.scope: Deactivated successfully.
Nov 26 18:13:36 np0005537197 python3.9[227002]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:13:37 np0005537197 python3.9[227154]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Nov 26 18:13:37 np0005537197 podman[227155]: 2025-11-26 23:13:37.813042954 +0000 UTC m=+0.107150183 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527)
Nov 26 18:13:39 np0005537197 python3.9[227337]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:13:39 np0005537197 systemd[1]: Started libpod-conmon-bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517.scope.
Nov 26 18:13:39 np0005537197 podman[227338]: 2025-11-26 23:13:39.17933351 +0000 UTC m=+0.133491209 container exec bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true)
Nov 26 18:13:39 np0005537197 podman[227338]: 2025-11-26 23:13:39.213031981 +0000 UTC m=+0.167189590 container exec_died bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0)
Nov 26 18:13:39 np0005537197 systemd[1]: libpod-conmon-bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517.scope: Deactivated successfully.
Nov 26 18:13:40 np0005537197 python3.9[227516]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:13:40 np0005537197 systemd[1]: Started libpod-conmon-bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517.scope.
Nov 26 18:13:40 np0005537197 podman[227517]: 2025-11-26 23:13:40.582880932 +0000 UTC m=+0.153382046 container exec bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527)
Nov 26 18:13:40 np0005537197 podman[227517]: 2025-11-26 23:13:40.619020487 +0000 UTC m=+0.189521651 container exec_died bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125)
Nov 26 18:13:40 np0005537197 systemd[1]: libpod-conmon-bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517.scope: Deactivated successfully.
Nov 26 18:13:41 np0005537197 python3.9[227698]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:13:42 np0005537197 python3.9[227850]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Nov 26 18:13:44 np0005537197 python3.9[228012]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:13:44 np0005537197 systemd[1]: Started libpod-conmon-413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262.scope.
Nov 26 18:13:44 np0005537197 podman[228013]: 2025-11-26 23:13:44.219407169 +0000 UTC m=+0.135773850 container exec 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 18:13:44 np0005537197 podman[228013]: 2025-11-26 23:13:44.252630908 +0000 UTC m=+0.168997539 container exec_died 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 18:13:44 np0005537197 systemd[1]: libpod-conmon-413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262.scope: Deactivated successfully.
Nov 26 18:13:45 np0005537197 python3.9[228193]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:13:45 np0005537197 systemd[1]: Started libpod-conmon-413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262.scope.
Nov 26 18:13:45 np0005537197 podman[228194]: 2025-11-26 23:13:45.528832202 +0000 UTC m=+0.154940347 container exec 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 18:13:45 np0005537197 podman[228194]: 2025-11-26 23:13:45.563999362 +0000 UTC m=+0.190107457 container exec_died 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 18:13:45 np0005537197 systemd[1]: libpod-conmon-413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262.scope: Deactivated successfully.
Nov 26 18:13:46 np0005537197 python3.9[228375]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:13:46 np0005537197 podman[228376]: 2025-11-26 23:13:46.864896879 +0000 UTC m=+0.144417578 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Nov 26 18:13:47 np0005537197 python3.9[228551]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Nov 26 18:13:48 np0005537197 podman[228690]: 2025-11-26 23:13:48.832568433 +0000 UTC m=+0.109961938 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, config_id=edpm, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.component=ubi9-minimal-container)
Nov 26 18:13:48 np0005537197 podman[228689]: 2025-11-26 23:13:48.839670791 +0000 UTC m=+0.124110462 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Nov 26 18:13:48 np0005537197 podman[228687]: 2025-11-26 23:13:48.843401369 +0000 UTC m=+0.126492854 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 26 18:13:48 np0005537197 systemd[1]: d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61-4e472a0b76970b95.service: Main process exited, code=exited, status=1/FAILURE
Nov 26 18:13:48 np0005537197 systemd[1]: d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61-4e472a0b76970b95.service: Failed with result 'exit-code'.
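The failed transient unit above is how podman schedules container healthchecks: each run executes the configured 'healthcheck.test' inside the container, and a non-zero exit raises the failing streak (visible as health_failing_streak=2 for ceilometer_agent_ipmi two lines earlier). A minimal sketch for inspecting the recorded health state, assuming the abbreviated container ID below:

    import json, subprocess

    # Inspect the health state podman recorded for the container whose
    # transient healthcheck unit just exited 1/FAILURE. CID abbreviated.
    CID = "d7e7bc031ad2"  # ceilometer_agent_ipmi
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}", CID],
        check=True, capture_output=True, text=True,
    ).stdout
    health = json.loads(out)
    print(health["Status"], health["FailingStreak"])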
Nov 26 18:13:49 np0005537197 python3.9[228767]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:13:49 np0005537197 systemd[1]: Started libpod-conmon-28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897.scope.
Nov 26 18:13:49 np0005537197 podman[228771]: 2025-11-26 23:13:49.192457627 +0000 UTC m=+0.120575979 container exec 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 18:13:49 np0005537197 podman[228771]: 2025-11-26 23:13:49.225438918 +0000 UTC m=+0.153557280 container exec_died 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 18:13:49 np0005537197 systemd[1]: libpod-conmon-28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897.scope: Deactivated successfully.
Nov 26 18:13:50 np0005537197 python3.9[228955]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:13:50 np0005537197 systemd[1]: Started libpod-conmon-28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897.scope.
Nov 26 18:13:50 np0005537197 podman[228956]: 2025-11-26 23:13:50.523672955 +0000 UTC m=+0.143495384 container exec 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 18:13:50 np0005537197 podman[228956]: 2025-11-26 23:13:50.558378172 +0000 UTC m=+0.178200601 container exec_died 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 18:13:50 np0005537197 systemd[1]: libpod-conmon-28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897.scope: Deactivated successfully.
Nov 26 18:13:50 np0005537197 podman[228988]: 2025-11-26 23:13:50.798645894 +0000 UTC m=+0.114854507 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 18:13:50 np0005537197 podman[228986]: 2025-11-26 23:13:50.815060108 +0000 UTC m=+0.138178614 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.tags=base rhel9, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., distribution-scope=public, name=ubi9, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, build-date=2024-09-18T21:23:30, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 26 18:13:51 np0005537197 python3.9[229178]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
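The three tasks above form one recurring pattern: ansible-containers.podman.podman_container_exec runs `id -u` and then `id -g` inside the container, and ansible.builtin.file chowns the healthcheck mount to that uid/gid (0/0 here, since podman_exporter runs as root) with mode 0700. A minimal equivalent sketch using the podman CLI directly:

    import subprocess

    # Mirror the Ansible steps above: discover the uid/gid the exporter
    # runs as, then restrict its healthcheck dir to that user.
    NAME = "podman_exporter"
    DIR = "/var/lib/openstack/healthchecks/podman_exporter"
    uid = subprocess.run(["podman", "exec", NAME, "id", "-u"],
                         check=True, capture_output=True, text=True).stdout.strip()
    gid = subprocess.run(["podman", "exec", NAME, "id", "-g"],
                         check=True, capture_output=True, text=True).stdout.strip()
    subprocess.run(["chown", "-R", f"{uid}:{gid}", DIR], check=True)
    subprocess.run(["chmod", "-R", "0700", DIR], check=True)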
Nov 26 18:13:52 np0005537197 python3.9[229330]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Nov 26 18:13:54 np0005537197 python3.9[229495]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:13:54 np0005537197 systemd[1]: Started libpod-conmon-db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec.scope.
Nov 26 18:13:54 np0005537197 podman[229496]: 2025-11-26 23:13:54.372878802 +0000 UTC m=+0.148547218 container exec db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, vendor=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc.)
Nov 26 18:13:54 np0005537197 podman[229496]: 2025-11-26 23:13:54.382207808 +0000 UTC m=+0.157876204 container exec_died db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, version=9.6, name=ubi9-minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc.)
Nov 26 18:13:54 np0005537197 systemd[1]: libpod-conmon-db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec.scope: Deactivated successfully.
Nov 26 18:13:55 np0005537197 python3.9[229678]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:13:55 np0005537197 systemd[1]: Started libpod-conmon-db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec.scope.
Nov 26 18:13:55 np0005537197 podman[229679]: 2025-11-26 23:13:55.67679947 +0000 UTC m=+0.133746677 container exec db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, vcs-type=git, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, name=ubi9-minimal, config_id=edpm, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 26 18:13:55 np0005537197 podman[229679]: 2025-11-26 23:13:55.708165079 +0000 UTC m=+0.165112286 container exec_died db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, version=9.6, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 26 18:13:55 np0005537197 systemd[1]: libpod-conmon-db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec.scope: Deactivated successfully.
Nov 26 18:13:56 np0005537197 python3.9[229861]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:13:57 np0005537197 python3.9[230013]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Nov 26 18:13:58 np0005537197 python3.9[230177]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:13:59 np0005537197 systemd[1]: Started libpod-conmon-d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61.scope.
Nov 26 18:13:59 np0005537197 podman[230178]: 2025-11-26 23:13:59.154644673 +0000 UTC m=+0.132469024 container exec d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 26 18:13:59 np0005537197 podman[230178]: 2025-11-26 23:13:59.190028368 +0000 UTC m=+0.167852659 container exec_died d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 26 18:13:59 np0005537197 systemd[1]: libpod-conmon-d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61.scope: Deactivated successfully.
Nov 26 18:13:59 np0005537197 podman[203621]: time="2025-11-26T23:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 18:13:59 np0005537197 podman[203621]: @ - - [26/Nov/2025:23:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Nov 26 18:13:59 np0005537197 podman[203621]: @ - - [26/Nov/2025:23:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4271 "" "Go-http-client/1.1"
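These two requests hit the libpod REST API over the Podman socket, the same unix:///run/podman/podman.sock that podman_exporter is configured with via CONTAINER_HOST. A sketch of the first call, shelling out to curl's --unix-socket support:

    import json, subprocess

    # Reproduce the "GET /v4.9.3/libpod/containers/json?all=true" call
    # from the log; the socket path matches CONTAINER_HOST above.
    out = subprocess.run(
        ["curl", "-s", "--unix-socket", "/run/podman/podman.sock",
         "http://d/v4.9.3/libpod/containers/json?all=true"],
        check=True, capture_output=True, text=True,
    ).stdout
    for ctr in json.loads(out):
        print(ctr["Names"][0], ctr["State"])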
Nov 26 18:14:00 np0005537197 python3.9[230360]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:14:00 np0005537197 systemd[1]: Started libpod-conmon-d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61.scope.
Nov 26 18:14:00 np0005537197 podman[230361]: 2025-11-26 23:14:00.472407796 +0000 UTC m=+0.106968348 container exec d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 26 18:14:00 np0005537197 podman[230361]: 2025-11-26 23:14:00.506670151 +0000 UTC m=+0.141230683 container exec_died d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 26 18:14:00 np0005537197 systemd[1]: libpod-conmon-d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61.scope: Deactivated successfully.
Nov 26 18:14:01 np0005537197 podman[230511]: 2025-11-26 23:14:01.412883907 +0000 UTC m=+0.129665269 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 26 18:14:01 np0005537197 openstack_network_exporter[205787]: ERROR   23:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 18:14:01 np0005537197 openstack_network_exporter[205787]: ERROR   23:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 18:14:01 np0005537197 openstack_network_exporter[205787]: ERROR   23:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 18:14:01 np0005537197 openstack_network_exporter[205787]: ERROR   23:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 18:14:01 np0005537197 openstack_network_exporter[205787]: ERROR   23:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
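The appctl errors above come from openstack_network_exporter probing OVS/OVN daemons through their unixctl control sockets. ovn-northd normally runs on controller nodes, so its absence on this compute host is expected noise, and the dpif-netdev errors suggest no userspace (netdev) datapath is configured. A quick check for the sockets the exporter looks for, assuming the usual <daemon>.<pid>.ctl naming and taking the host paths from the container's volume list:

    import glob

    # Empty output for a pattern explains the matching appctl error.
    for pattern in ("/var/run/openvswitch/*.ctl",
                    "/var/lib/openvswitch/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "no control sockets")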
Nov 26 18:14:01 np0005537197 python3.9[230558]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:14:02 np0005537197 python3.9[230710]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Nov 26 18:14:03 np0005537197 python3.9[230876]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:14:03 np0005537197 systemd[1]: Started libpod-conmon-331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad.scope.
Nov 26 18:14:03 np0005537197 podman[230877]: 2025-11-26 23:14:03.975897726 +0000 UTC m=+0.147953232 container exec 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.tags=base rhel9, container_name=kepler, io.buildah.version=1.29.0, release=1214.1726694543, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, config_id=edpm, vcs-type=git)
Nov 26 18:14:04 np0005537197 podman[230877]: 2025-11-26 23:14:04.01195214 +0000 UTC m=+0.184007636 container exec_died 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1214.1726694543, config_id=edpm, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, build-date=2024-09-18T21:23:30, vcs-type=git, version=9.4, com.redhat.component=ubi9-container, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 26 18:14:04 np0005537197 systemd[1]: libpod-conmon-331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad.scope: Deactivated successfully.
Nov 26 18:14:05 np0005537197 podman[231057]: 2025-11-26 23:14:05.084657025 +0000 UTC m=+0.126686569 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 18:14:05 np0005537197 python3.9[231058]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 26 18:14:05 np0005537197 systemd[1]: Started libpod-conmon-331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad.scope.
Nov 26 18:14:05 np0005537197 podman[231081]: 2025-11-26 23:14:05.365301203 +0000 UTC m=+0.155752017 container exec 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, config_id=edpm, distribution-scope=public, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release=1214.1726694543, release-0.7.12=, io.buildah.version=1.29.0, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4)
Nov 26 18:14:05 np0005537197 podman[231081]: 2025-11-26 23:14:05.401240894 +0000 UTC m=+0.191691678 container exec_died 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, container_name=kepler, name=ubi9, io.openshift.expose-services=, io.openshift.tags=base rhel9, release-0.7.12=, version=9.4, io.buildah.version=1.29.0, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container)
Nov 26 18:14:05 np0005537197 systemd[1]: libpod-conmon-331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad.scope: Deactivated successfully.
Nov 26 18:14:06 np0005537197 python3.9[231263]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:14:07 np0005537197 python3.9[231415]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:14:08 np0005537197 podman[231539]: 2025-11-26 23:14:08.66471437 +0000 UTC m=+0.115409311 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, managed_by=edpm_ansible)
Nov 26 18:14:08 np0005537197 python3.9[231587]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:14:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:14:09.615 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 18:14:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:14:09.616 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 18:14:09 np0005537197 ovn_metadata_agent[106590]: 2025-11-26 23:14:09.617 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
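This acquire/acquired/released triple is oslo.concurrency's standard lock tracing: ProcessMonitor._check_child_processes is wrapped in a named in-process lock. The pattern, as a minimal sketch (the decorated body is hypothetical):

    from oslo_concurrency import lockutils

    # Same named-lock pattern that produced the DEBUG lines above.
    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        pass  # walk monitored child processes, respawn any that died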
Nov 26 18:14:09 np0005537197 python3.9[231710]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764198848.1187882-844-239433463901681/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
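kepler.yaml lands in /var/lib/edpm-config/firewall/, the drop-in directory the edpm_nftables role assembles host rules from. A hypothetical rendering of its content, assuming the TripleO-style rule schema and taking port 8888 from the kepler container config above; the real file is templated from firewall.yaml.j2 and may differ:

    import yaml  # PyYAML

    # Hypothetical rule list for the kepler metrics endpoint.
    rules = [{"rule_name": "100 kepler exporter",
              "rule": {"proto": "tcp", "dport": 8888}}]
    print(yaml.safe_dump(rules))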
Nov 26 18:14:11 np0005537197 python3.9[231862]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:14:12 np0005537197 python3.9[232014]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:14:12 np0005537197 python3.9[232092]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:14:14 np0005537197 python3.9[232244]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:14:14 np0005537197 python3.9[232322]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.fe4pxla_ recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:14:15 np0005537197 python3.9[232474]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:14:16 np0005537197 nova_compute[189387]: 2025-11-26 23:14:16.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 18:14:16 np0005537197 nova_compute[189387]: 2025-11-26 23:14:16.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 26 18:14:16 np0005537197 nova_compute[189387]: 2025-11-26 23:14:16.145 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 26 18:14:16 np0005537197 nova_compute[189387]: 2025-11-26 23:14:16.147 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 18:14:16 np0005537197 nova_compute[189387]: 2025-11-26 23:14:16.148 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 26 18:14:16 np0005537197 nova_compute[189387]: 2025-11-26 23:14:16.164 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 18:14:16 np0005537197 python3.9[232552]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:14:17 np0005537197 nova_compute[189387]: 2025-11-26 23:14:17.178 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 18:14:17 np0005537197 podman[232676]: 2025-11-26 23:14:17.673792095 +0000 UTC m=+0.210315440 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 26 18:14:17 np0005537197 python3.9[232724]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:14:18 np0005537197 nova_compute[189387]: 2025-11-26 23:14:18.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 18:14:18 np0005537197 nova_compute[189387]: 2025-11-26 23:14:18.159 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 18:14:18 np0005537197 nova_compute[189387]: 2025-11-26 23:14:18.159 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 18:14:18 np0005537197 nova_compute[189387]: 2025-11-26 23:14:18.159 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 18:14:18 np0005537197 nova_compute[189387]: 2025-11-26 23:14:18.160 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 18:14:18 np0005537197 nova_compute[189387]: 2025-11-26 23:14:18.597 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 18:14:18 np0005537197 nova_compute[189387]: 2025-11-26 23:14:18.599 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5681MB free_disk=72.44149398803711GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 18:14:18 np0005537197 nova_compute[189387]: 2025-11-26 23:14:18.599 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 18:14:18 np0005537197 nova_compute[189387]: 2025-11-26 23:14:18.600 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 18:14:18 np0005537197 nova_compute[189387]: 2025-11-26 23:14:18.772 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 18:14:18 np0005537197 nova_compute[189387]: 2025-11-26 23:14:18.772 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 18:14:18 np0005537197 nova_compute[189387]: 2025-11-26 23:14:18.873 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing inventories for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 26 18:14:18 np0005537197 nova_compute[189387]: 2025-11-26 23:14:18.961 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Updating ProviderTree inventory for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 26 18:14:18 np0005537197 nova_compute[189387]: 2025-11-26 23:14:18.962 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Updating inventory in ProviderTree for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 26 18:14:18 np0005537197 nova_compute[189387]: 2025-11-26 23:14:18.978 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing aggregate associations for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 26 18:14:19 np0005537197 nova_compute[189387]: 2025-11-26 23:14:19.010 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing trait associations for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78, traits: COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE41,HW_CPU_X86_AMD_SVM,HW_CPU_X86_MMX,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,HW_CPU_X86_SSE,HW_CPU_X86_AESNI,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI2,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 26 18:14:19 np0005537197 nova_compute[189387]: 2025-11-26 23:14:19.044 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 18:14:19 np0005537197 nova_compute[189387]: 2025-11-26 23:14:19.063 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 18:14:19 np0005537197 nova_compute[189387]: 2025-11-26 23:14:19.066 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 18:14:19 np0005537197 python3[232883]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 26 18:14:19 np0005537197 nova_compute[189387]: 2025-11-26 23:14:19.067 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.467s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 18:14:19 np0005537197 podman[232986]: 2025-11-26 23:14:19.829755036 +0000 UTC m=+0.103326663 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.openshift.tags=minimal rhel9, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, distribution-scope=public, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., release=1755695350, build-date=2025-08-20T13:12:41, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.expose-services=, config_id=edpm)
Nov 26 18:14:19 np0005537197 podman[232984]: 2025-11-26 23:14:19.849992551 +0000 UTC m=+0.134009923 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 26 18:14:19 np0005537197 podman[232985]: 2025-11-26 23:14:19.85488792 +0000 UTC m=+0.131615620 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm)
Nov 26 18:14:20 np0005537197 nova_compute[189387]: 2025-11-26 23:14:20.068 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 18:14:20 np0005537197 nova_compute[189387]: 2025-11-26 23:14:20.069 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 18:14:20 np0005537197 nova_compute[189387]: 2025-11-26 23:14:20.069 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 18:14:20 np0005537197 nova_compute[189387]: 2025-11-26 23:14:20.088 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 26 18:14:20 np0005537197 nova_compute[189387]: 2025-11-26 23:14:20.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 18:14:20 np0005537197 nova_compute[189387]: 2025-11-26 23:14:20.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 18:14:20 np0005537197 nova_compute[189387]: 2025-11-26 23:14:20.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 18:14:20 np0005537197 nova_compute[189387]: 2025-11-26 23:14:20.124 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 18:14:20 np0005537197 python3.9[233088]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:14:20 np0005537197 python3.9[233166]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:14:21 np0005537197 nova_compute[189387]: 2025-11-26 23:14:21.121 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 18:14:21 np0005537197 nova_compute[189387]: 2025-11-26 23:14:21.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 18:14:21 np0005537197 podman[233283]: 2025-11-26 23:14:21.851999581 +0000 UTC m=+0.133431537 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, architecture=x86_64, managed_by=edpm_ansible, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, container_name=kepler, distribution-scope=public)
Nov 26 18:14:21 np0005537197 podman[233288]: 2025-11-26 23:14:21.852015553 +0000 UTC m=+0.128251862 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 18:14:22 np0005537197 python3.9[233358]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:14:22 np0005537197 python3.9[233436]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:14:23 np0005537197 nova_compute[189387]: 2025-11-26 23:14:23.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 18:14:23 np0005537197 python3.9[233588]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:14:24 np0005537197 python3.9[233666]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:14:25 np0005537197 python3.9[233818]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:14:26 np0005537197 python3.9[233896]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:14:27 np0005537197 python3.9[234048]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:14:28 np0005537197 python3.9[234173]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764198866.4315047-969-244553699535391/.source.nft follow=False _original_basename=ruleset.j2 checksum=b82fbd2c71bb7c36c630c2301913f0f42fd2e7ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:14:29 np0005537197 python3.9[234325]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:14:29 np0005537197 podman[203621]: time="2025-11-26T23:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 18:14:29 np0005537197 podman[203621]: @ - - [26/Nov/2025:23:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 26 18:14:29 np0005537197 podman[203621]: @ - - [26/Nov/2025:23:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4283 "" "Go-http-client/1.1"
Nov 26 18:14:30 np0005537197 python3.9[234477]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:14:31 np0005537197 openstack_network_exporter[205787]: ERROR   23:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 18:14:31 np0005537197 openstack_network_exporter[205787]: ERROR   23:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 18:14:31 np0005537197 openstack_network_exporter[205787]: ERROR   23:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 18:14:31 np0005537197 openstack_network_exporter[205787]: ERROR   23:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 18:14:31 np0005537197 openstack_network_exporter[205787]: ERROR   23:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 18:14:31 np0005537197 podman[234604]: 2025-11-26 23:14:31.741566424 +0000 UTC m=+0.117764227 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 26 18:14:31 np0005537197 python3.9[234650]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:14:33 np0005537197 python3.9[234802]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:14:34 np0005537197 python3.9[234955]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 26 18:14:35 np0005537197 podman[235109]: 2025-11-26 23:14:35.367579798 +0000 UTC m=+0.122390267 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 18:14:35 np0005537197 python3.9[235110]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 18:14:36 np0005537197 python3.9[235288]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.837 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.838 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.838 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.839 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.839 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
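The run of register_pollster_execution lines above shows every stevedore-loaded pollster being bound to the same ThreadPoolExecutor (note the identical executor address 0x7f7ce5274320) with empty cache, history, and discovery-cache dicts. A minimal sketch of that wiring, assuming the conventional 'ceilometer.poll.compute' entry-point namespace and a simplified PollingTask stand-in:

```python
# Minimal sketch (not ceilometer's actual implementation) of binding
# stevedore-loaded pollsters to one shared ThreadPoolExecutor, as the
# register_pollster_execution DEBUG lines above describe. The namespace
# string and the PollingTask class are illustrative assumptions.
from concurrent.futures import ThreadPoolExecutor

from stevedore import extension


class PollingTask:
    def __init__(self, namespace="ceilometer.poll.compute"):
        self.executor = ThreadPoolExecutor(max_workers=4)  # shared by all pollsters
        self.cache = {}            # "cache [{}]" in the log
        self.history = {}          # "pollster history [{}]"
        self.discovery_cache = {}  # "discovery cache [{}]"
        self.registered = []
        for ext in extension.ExtensionManager(namespace, invoke_on_load=True):
            # Each stevedore Extension wraps one pollster object (ext.obj).
            self.registered.append(ext)
            print(f"Registering pollster [{ext.name}] from source [pollsters]")
```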
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.842 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.843 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.843 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.844 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.844 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.844 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.844 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.844 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.845 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.845 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.845 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.845 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.845 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.846 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.846 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.847 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.847 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.847 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.848 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.848 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.849 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.849 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.850 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.850 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.852 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.852 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.853 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.853 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.854 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
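Each Executing-discovery/Skip pair above traces the same per-pollster cycle: run the local_instances discovery once (memoized in the discovery cache), then skip the meter when discovery returns nothing. A simplified sketch of that control flow; discover() and get_samples() mirror the real AgentManager/pollster interfaces only loosely and are assumptions here:

```python
# Sketch of the discover-then-poll-or-skip cycle traced by the DEBUG
# pairs above. The discovery result is memoized per cycle, which is why
# every pollster can consult it without re-querying libvirt.
def run_pollster(agent, pollster, discovery_cache):
    if "local_instances" not in discovery_cache:
        discovery_cache["local_instances"] = agent.discover(["local_instances"])
    resources = discovery_cache["local_instances"]
    if not resources:
        # Matches "Skip pollster <name>, no resources found this cycle".
        print(f"Skip pollster {pollster.name}, no resources found this cycle")
        return []
    return list(pollster.obj.get_samples(agent, {}, resources))
```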
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:14:36 np0005537197 ceilometer_agent_compute[200139]: 2025-11-26 23:14:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 18:14:37 np0005537197 systemd-logind[819]: Session 27 logged out. Waiting for processes to exit.
Nov 26 18:14:37 np0005537197 systemd[1]: session-27.scope: Deactivated successfully.
Nov 26 18:14:37 np0005537197 systemd[1]: session-27.scope: Consumed 1min 51.592s CPU time.
Nov 26 18:14:37 np0005537197 systemd-logind[819]: Removed session 27.
Nov 26 18:14:39 np0005537197 podman[235314]: 2025-11-26 23:14:39.805493181 +0000 UTC m=+0.093702758 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
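The podman health_status records above come from the container's periodic health-check timer. The same state can be read back on demand; a small sketch using the real `podman inspect` Go-template flag, with the container name taken from the log line:

```python
# Read the health state that podman's health-check timer logs above.
# `podman inspect --format '{{json .State.Health}}'` is a real podman
# flag; running it needs the same privileges as the podman service.
import json
import subprocess

def container_health(name="ceilometer_agent_compute"):
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}", name],
        capture_output=True, text=True, check=True,
    ).stdout
    health = json.loads(out)
    # Corresponds to health_status=healthy, health_failing_streak=0 above.
    return health.get("Status"), health.get("FailingStreak")
```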
Nov 26 18:14:42 np0005537197 systemd-logind[819]: New session 28 of user zuul.
Nov 26 18:14:42 np0005537197 systemd[1]: Started Session 28 of User zuul.
Nov 26 18:14:44 np0005537197 python3.9[235485]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 18:14:46 np0005537197 python3.9[235641]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Nov 26 18:14:47 np0005537197 python3.9[235794]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 26 18:14:48 np0005537197 podman[235850]: 2025-11-26 23:14:48.205383381 +0000 UTC m=+0.119734729 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 26 18:14:48 np0005537197 python3.9[235897]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 26 18:14:50 np0005537197 podman[235912]: 2025-11-26 23:14:50.801531259 +0000 UTC m=+0.085507586 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 18:14:50 np0005537197 podman[235914]: 2025-11-26 23:14:50.832456557 +0000 UTC m=+0.095682780 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, architecture=x86_64, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.openshift.expose-services=, build-date=2025-08-20T13:12:41)
Nov 26 18:14:50 np0005537197 podman[235913]: 2025-11-26 23:14:50.848307395 +0000 UTC m=+0.132241641 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm)
Nov 26 18:14:51 np0005537197 python3.9[236118]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:14:52 np0005537197 podman[236213]: 2025-11-26 23:14:52.789134584 +0000 UTC m=+0.121969457 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, release-0.7.12=, version=9.4, io.buildah.version=1.29.0, vcs-type=git, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9)
Nov 26 18:14:52 np0005537197 podman[236214]: 2025-11-26 23:14:52.813476042 +0000 UTC m=+0.128192077 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 18:14:52 np0005537197 python3.9[236281]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764198890.9628599-54-249774549334624/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:14:54 np0005537197 python3.9[236434]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 18:14:55 np0005537197 python3.9[236586]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 26 18:14:56 np0005537197 python3.9[236709]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764198894.6986558-77-16309408603438/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 26 23:14:57 compute-0 python3.9[236861]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 26 23:14:57 compute-0 systemd[1]: Stopping System Logging Service...
Nov 26 23:14:57 compute-0 rsyslogd[1005]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1005" x-info="https://www.rsyslog.com"] exiting on signal 15.
Nov 26 23:14:57 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Nov 26 23:14:57 compute-0 systemd[1]: Stopped System Logging Service.
Nov 26 23:14:57 compute-0 systemd[1]: rsyslog.service: Consumed 5.457s CPU time, 8.2M memory peak, read 0B from disk, written 7.0M to disk.
Nov 26 23:14:57 compute-0 systemd[1]: Starting System Logging Service...
Nov 26 23:14:58 compute-0 rsyslogd[236865]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="236865" x-info="https://www.rsyslog.com"] start
Nov 26 23:14:58 compute-0 systemd[1]: Started System Logging Service.
Nov 26 23:14:58 compute-0 rsyslogd[236865]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 23:14:58 compute-0 rsyslogd[236865]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Nov 26 23:14:58 compute-0 rsyslogd[236865]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Nov 26 23:14:58 compute-0 rsyslogd[236865]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Nov 26 23:14:58 compute-0 rsyslogd[236865]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
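The e/2330 and e/2331 warnings above mean the ossl netstream driver found a CA bundle but no client certificate or key, so the TLS session to 172.17.0.80 came up without mutual authentication. A hypothetical sketch of what /etc/rsyslog.d/10-telemetry.conf (installed by the copy task above) could set; only the CA path appears in this log, while the client cert/key paths and the 6514 port are assumptions:

```
# Hypothetical rsyslog sketch; the directives are real rsyslog
# parameters, but the client cert/key paths and port are assumptions.
global(
  DefaultNetstreamDriver="ossl"
  DefaultNetstreamDriverCAFile="/etc/pki/rsyslog/ca-openshift.crt"
  # Assumed paths; setting these silences warnings e/2330 and e/2331:
  DefaultNetstreamDriverCertFile="/etc/pki/rsyslog/client.crt"
  DefaultNetstreamDriverKeyFile="/etc/pki/rsyslog/client.key"
)
action(type="omfwd" target="172.17.0.80" port="6514" protocol="tcp"
       StreamDriver="ossl" StreamDriverMode="1"
       StreamDriverAuthMode="x509/certvalid")
```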
Nov 26 23:14:58 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Nov 26 23:14:58 compute-0 systemd[1]: session-28.scope: Consumed 12.399s CPU time.
Nov 26 23:14:58 compute-0 systemd-logind[819]: Session 28 logged out. Waiting for processes to exit.
Nov 26 23:14:58 compute-0 systemd-logind[819]: Removed session 28.
Nov 26 23:14:59 compute-0 podman[203621]: time="2025-11-26T23:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:14:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 26 23:14:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4282 "" "Go-http-client/1.1"
Nov 26 23:15:01 compute-0 openstack_network_exporter[205787]: ERROR   23:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:15:01 compute-0 openstack_network_exporter[205787]: ERROR   23:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:15:01 compute-0 openstack_network_exporter[205787]: ERROR   23:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:15:01 compute-0 openstack_network_exporter[205787]: ERROR   23:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:15:01 compute-0 openstack_network_exporter[205787]: ERROR   23:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
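The exporter errors above all reduce to one probe failing: before issuing an ovs-appctl-style call it looks for the daemon's control socket under the OVS/OVN run directories, and ovn-northd does not run on a compute node, so no socket exists. A sketch of that probe; the run-directory paths are the conventional defaults and an assumption here:

```python
# Probe for a daemon control socket the way the exporter's appctl calls
# do before connecting; OVS/OVN daemons create <name>.<pid>.ctl files.
import glob

def control_socket(daemon, run_dirs=("/run/ovn", "/var/run/openvswitch")):
    for run_dir in run_dirs:
        hits = glob.glob(f"{run_dir}/{daemon}.*.ctl")
        if hits:
            return hits[0]
    return None  # -> "no control socket files found for <daemon>"

print(control_socket("ovn-northd"))  # None on a compute-only node
```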
Nov 26 23:15:02 compute-0 podman[236894]: 2025-11-26 23:15:02.833294685 +0000 UTC m=+0.118710373 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 26 23:15:05 compute-0 podman[236912]: 2025-11-26 23:15:05.794766893 +0000 UTC m=+0.085788253 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:15:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:15:09.617 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:15:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:15:09.618 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:15:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:15:09.618 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
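The Acquiring/acquired/released triplet above is the standard oslo.concurrency trace: lockutils logs the wait and hold times around a named lock. The pattern that produces it, sketched with a stand-in body:

```python
# The decorator behind the three lockutils DEBUG lines above:
# "Acquiring lock", "Lock ... acquired ... waited", "Lock ... released ... held".
from oslo_concurrency import lockutils

@lockutils.synchronized('_check_child_processes')
def _check_child_processes():
    # Body is a stand-in; while the lock is held the process monitor can
    # safely inspect and respawn its child processes.
    pass
```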
Nov 26 23:15:10 compute-0 podman[236935]: 2025-11-26 23:15:10.844031719 +0000 UTC m=+0.123291280 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 23:15:18 compute-0 nova_compute[189387]: 2025-11-26 23:15:18.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:15:18 compute-0 nova_compute[189387]: 2025-11-26 23:15:18.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:15:18 compute-0 nova_compute[189387]: 2025-11-26 23:15:18.163 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:15:18 compute-0 nova_compute[189387]: 2025-11-26 23:15:18.164 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:15:18 compute-0 nova_compute[189387]: 2025-11-26 23:15:18.165 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:15:18 compute-0 nova_compute[189387]: 2025-11-26 23:15:18.165 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:15:18 compute-0 nova_compute[189387]: 2025-11-26 23:15:18.591 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:15:18 compute-0 nova_compute[189387]: 2025-11-26 23:15:18.593 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5725MB free_disk=72.4379653930664GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 23:15:18 compute-0 nova_compute[189387]: 2025-11-26 23:15:18.594 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:15:18 compute-0 nova_compute[189387]: 2025-11-26 23:15:18.595 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:15:18 compute-0 nova_compute[189387]: 2025-11-26 23:15:18.682 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:15:18 compute-0 nova_compute[189387]: 2025-11-26 23:15:18.683 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 23:15:18 compute-0 nova_compute[189387]: 2025-11-26 23:15:18.728 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:15:18 compute-0 nova_compute[189387]: 2025-11-26 23:15:18.747 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
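The inventory entries above are what nova reports to the Placement service; for each resource class, the capacity the scheduler can place against works out to (total - reserved) * allocation_ratio. A minimal sketch of that arithmetic using the values from this report (the dict and helper below are illustrative, not Nova code):

    # Per-resource-class capacity implied by the inventory logged above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 0, "allocation_ratio": 0.9},
    }

    def usable(rc: str) -> float:
        inv = inventory[rc]
        return (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]

    for rc in inventory:
        print(rc, usable(rc))
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 71.1

That is why a host with 8 physical vCPUs and 7680 MB of RAM can schedule up to 32 vCPUs but only 7168 MB of guest memory.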
Nov 26 23:15:18 compute-0 nova_compute[189387]: 2025-11-26 23:15:18.750 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:15:18 compute-0 nova_compute[189387]: 2025-11-26 23:15:18.750 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:15:18 compute-0 podman[236954]: 2025-11-26 23:15:18.899623383 +0000 UTC m=+0.187354383 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 26 23:15:19 compute-0 nova_compute[189387]: 2025-11-26 23:15:19.751 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:15:19 compute-0 nova_compute[189387]: 2025-11-26 23:15:19.752 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:15:19 compute-0 nova_compute[189387]: 2025-11-26 23:15:19.752 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 23:15:19 compute-0 nova_compute[189387]: 2025-11-26 23:15:19.905 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 23:15:20 compute-0 nova_compute[189387]: 2025-11-26 23:15:20.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:15:20 compute-0 nova_compute[189387]: 2025-11-26 23:15:20.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:15:20 compute-0 nova_compute[189387]: 2025-11-26 23:15:20.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 23:15:21 compute-0 nova_compute[189387]: 2025-11-26 23:15:21.121 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:15:21 compute-0 nova_compute[189387]: 2025-11-26 23:15:21.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:15:21 compute-0 podman[236981]: 2025-11-26 23:15:21.839153045 +0000 UTC m=+0.108854419 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, config_id=edpm)
Nov 26 23:15:21 compute-0 podman[236980]: 2025-11-26 23:15:21.846546045 +0000 UTC m=+0.125585239 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 26 23:15:21 compute-0 podman[236982]: 2025-11-26 23:15:21.84595958 +0000 UTC m=+0.113619461 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, vcs-type=git, config_id=edpm, maintainer=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal)
Nov 26 23:15:22 compute-0 nova_compute[189387]: 2025-11-26 23:15:22.120 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:15:22 compute-0 nova_compute[189387]: 2025-11-26 23:15:22.141 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:15:23 compute-0 nova_compute[189387]: 2025-11-26 23:15:23.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:15:23 compute-0 podman[237033]: 2025-11-26 23:15:23.587390187 +0000 UTC m=+0.130581959 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, name=ubi9, architecture=x86_64, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, release=1214.1726694543, com.redhat.component=ubi9-container, managed_by=edpm_ansible, version=9.4)
Nov 26 23:15:23 compute-0 podman[237034]: 2025-11-26 23:15:23.592360535 +0000 UTC m=+0.121784812 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 23:15:29 compute-0 podman[203621]: time="2025-11-26T23:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:15:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 26 23:15:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4276 "" "Go-http-client/1.1"
Nov 26 23:15:31 compute-0 openstack_network_exporter[205787]: ERROR   23:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:15:31 compute-0 openstack_network_exporter[205787]: ERROR   23:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:15:31 compute-0 openstack_network_exporter[205787]: ERROR   23:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:15:31 compute-0 openstack_network_exporter[205787]: ERROR   23:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:15:31 compute-0 openstack_network_exporter[205787]: ERROR   23:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:15:33 compute-0 podman[237076]: 2025-11-26 23:15:33.836935213 +0000 UTC m=+0.122307747 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0)
Nov 26 23:15:36 compute-0 podman[237096]: 2025-11-26 23:15:36.833516224 +0000 UTC m=+0.120015856 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:15:40 compute-0 systemd-logind[819]: New session 29 of user zuul.
Nov 26 23:15:40 compute-0 systemd[1]: Started Session 29 of User zuul.
Nov 26 23:15:41 compute-0 podman[237270]: 2025-11-26 23:15:41.752018827 +0000 UTC m=+0.133479421 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527)
Nov 26 23:15:41 compute-0 python3[237310]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 23:15:44 compute-0 python3[237536]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")#012journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 23:15:45 compute-0 python3[237689]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")#012journalctl -t "nova_compute" --no-pager -S "${tstamp}"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
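Both Ansible tasks above follow the same shape: compute a timestamp 30 minutes back, then pull one syslog identifier's journal entries since then (the #012 sequences are journald's escaping of the newlines inside the shell snippet). A rough Python equivalent of that pipeline, under the assumption that local time matches the journal's and using the nova_compute identifier from the logged command:

    import subprocess
    from datetime import datetime, timedelta

    # date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S"
    tstamp = (datetime.now() - timedelta(minutes=30)).strftime("%Y-%m-%d %H:%M:%S")

    # journalctl -t "nova_compute" --no-pager -S "${tstamp}"
    logs = subprocess.run(
        ["journalctl", "-t", "nova_compute", "--no-pager", "-S", tstamp],
        capture_output=True, text=True, check=True,
    ).stdout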
Nov 26 23:15:48 compute-0 python3[237840]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 26 23:15:49 compute-0 podman[237937]: 2025-11-26 23:15:49.91873539 +0000 UTC m=+0.197883066 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 26 23:15:50 compute-0 python3[238018]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 26 23:15:52 compute-0 podman[238226]: 2025-11-26 23:15:52.744778216 +0000 UTC m=+0.094198748 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, vcs-type=git, io.openshift.tags=minimal rhel9, release=1755695350, version=9.6, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, distribution-scope=public, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal)
Nov 26 23:15:52 compute-0 podman[238223]: 2025-11-26 23:15:52.750123336 +0000 UTC m=+0.096324654 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi)
Nov 26 23:15:52 compute-0 podman[238217]: 2025-11-26 23:15:52.766580019 +0000 UTC m=+0.117863060 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 26 23:15:52 compute-0 python3[238283]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 23:15:53 compute-0 podman[238435]: 2025-11-26 23:15:53.837701757 +0000 UTC m=+0.117951313 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 23:15:53 compute-0 podman[238433]: 2025-11-26 23:15:53.856366257 +0000 UTC m=+0.137878226 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.openshift.expose-services=, managed_by=edpm_ansible, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, architecture=x86_64, version=9.4, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., container_name=kepler, com.redhat.component=ubi9-container)
Nov 26 23:15:54 compute-0 python3[238501]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
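These podman ps probes grep one container's status line; podman appends "(healthy)" to the status once the configured healthcheck passes, which is the state the health_status events above report. A hedged Python version of the same check (the container name and marker string are taken from the logged commands, not from any podman API guarantee):

    import subprocess

    # podman ps -a --format "{{.Names}} {{.Status}}"
    ps = subprocess.run(
        ["podman", "ps", "-a", "--format", "{{.Names}} {{.Status}}"],
        capture_output=True, text=True, check=True,
    ).stdout
    ok = any(
        line.startswith("node_exporter") and "(healthy)" in line
        for line in ps.splitlines()
    )
    print("node_exporter healthy:", ok)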
Nov 26 23:15:59 compute-0 podman[203621]: time="2025-11-26T23:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:15:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 26 23:15:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4278 "" "Go-http-client/1.1"
Nov 26 23:16:01 compute-0 openstack_network_exporter[205787]: ERROR   23:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:16:01 compute-0 openstack_network_exporter[205787]: ERROR   23:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:16:01 compute-0 openstack_network_exporter[205787]: ERROR   23:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:16:01 compute-0 openstack_network_exporter[205787]: ERROR   23:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:16:01 compute-0 openstack_network_exporter[205787]: ERROR   23:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:16:04 compute-0 podman[238540]: 2025-11-26 23:16:04.859697983 +0000 UTC m=+0.144838820 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:16:07 compute-0 podman[238560]: 2025-11-26 23:16:07.833455445 +0000 UTC m=+0.118155139 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 23:16:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:16:09.618 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:16:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:16:09.618 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:16:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:16:09.618 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:16:12 compute-0 podman[238583]: 2025-11-26 23:16:12.853336633 +0000 UTC m=+0.132168796 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 26 23:16:18 compute-0 nova_compute[189387]: 2025-11-26 23:16:18.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:16:18 compute-0 nova_compute[189387]: 2025-11-26 23:16:18.164 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:16:18 compute-0 nova_compute[189387]: 2025-11-26 23:16:18.165 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:16:18 compute-0 nova_compute[189387]: 2025-11-26 23:16:18.165 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:16:18 compute-0 nova_compute[189387]: 2025-11-26 23:16:18.166 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:16:18 compute-0 nova_compute[189387]: 2025-11-26 23:16:18.684 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:16:18 compute-0 nova_compute[189387]: 2025-11-26 23:16:18.685 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5721MB free_disk=72.4377326965332GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 23:16:18 compute-0 nova_compute[189387]: 2025-11-26 23:16:18.685 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:16:18 compute-0 nova_compute[189387]: 2025-11-26 23:16:18.686 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:16:18 compute-0 nova_compute[189387]: 2025-11-26 23:16:18.765 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:16:18 compute-0 nova_compute[189387]: 2025-11-26 23:16:18.765 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 23:16:18 compute-0 nova_compute[189387]: 2025-11-26 23:16:18.792 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:16:18 compute-0 nova_compute[189387]: 2025-11-26 23:16:18.809 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 23:16:18 compute-0 nova_compute[189387]: 2025-11-26 23:16:18.811 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:16:18 compute-0 nova_compute[189387]: 2025-11-26 23:16:18.811 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:16:19 compute-0 nova_compute[189387]: 2025-11-26 23:16:19.811 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:16:19 compute-0 nova_compute[189387]: 2025-11-26 23:16:19.812 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:16:19 compute-0 nova_compute[189387]: 2025-11-26 23:16:19.813 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 23:16:19 compute-0 nova_compute[189387]: 2025-11-26 23:16:19.827 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 23:16:20 compute-0 nova_compute[189387]: 2025-11-26 23:16:20.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:16:20 compute-0 nova_compute[189387]: 2025-11-26 23:16:20.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:16:20 compute-0 nova_compute[189387]: 2025-11-26 23:16:20.124 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 23:16:20 compute-0 podman[238603]: 2025-11-26 23:16:20.820949961 +0000 UTC m=+0.117967374 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:16:21 compute-0 nova_compute[189387]: 2025-11-26 23:16:21.121 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:16:22 compute-0 nova_compute[189387]: 2025-11-26 23:16:22.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:16:22 compute-0 nova_compute[189387]: 2025-11-26 23:16:22.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:16:23 compute-0 nova_compute[189387]: 2025-11-26 23:16:23.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:16:23 compute-0 podman[238629]: 2025-11-26 23:16:23.780062516 +0000 UTC m=+0.068129213 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 26 23:16:23 compute-0 podman[238630]: 2025-11-26 23:16:23.848195888 +0000 UTC m=+0.115020656 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm)
Nov 26 23:16:23 compute-0 podman[238636]: 2025-11-26 23:16:23.8486978 +0000 UTC m=+0.110649700 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., name=ubi9-minimal, io.buildah.version=1.33.7, version=9.6, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, vcs-type=git, io.openshift.tags=minimal rhel9, release=1755695350, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm)
Nov 26 23:16:23 compute-0 podman[238684]: 2025-11-26 23:16:23.987640304 +0000 UTC m=+0.080921648 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 23:16:24 compute-0 podman[238686]: 2025-11-26 23:16:24.039610331 +0000 UTC m=+0.124619009 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, vcs-type=git, version=9.4, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, container_name=kepler, io.openshift.expose-services=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 26 23:16:24 compute-0 nova_compute[189387]: 2025-11-26 23:16:24.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:16:29 compute-0 podman[203621]: time="2025-11-26T23:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:16:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 26 23:16:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4274 "" "Go-http-client/1.1"
Nov 26 23:16:31 compute-0 openstack_network_exporter[205787]: ERROR   23:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:16:31 compute-0 openstack_network_exporter[205787]: ERROR   23:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:16:31 compute-0 openstack_network_exporter[205787]: ERROR   23:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:16:31 compute-0 openstack_network_exporter[205787]: ERROR   23:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:16:31 compute-0 openstack_network_exporter[205787]: ERROR   23:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:16:35 compute-0 podman[238729]: 2025-11-26 23:16:35.836031764 +0000 UTC m=+0.121388464 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.839 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.839 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.840 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.844 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.846 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.846 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.847 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.849 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.852 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.852 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.853 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.853 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.854 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.855 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.855 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.855 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.855 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.855 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.855 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.855 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.856 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.856 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.856 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.857 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.857 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.857 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.858 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.858 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.858 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.858 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.858 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.859 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.859 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:16:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:16:36.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:16:38 compute-0 podman[238749]: 2025-11-26 23:16:38.890826465 +0000 UTC m=+0.091993039 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:16:43 compute-0 podman[238770]: 2025-11-26 23:16:43.85276884 +0000 UTC m=+0.136909671 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Nov 26 23:16:51 compute-0 podman[238790]: 2025-11-26 23:16:51.837022585 +0000 UTC m=+0.130015060 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 26 23:16:53 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Nov 26 23:16:53 compute-0 systemd[1]: session-29.scope: Consumed 11.355s CPU time.
Nov 26 23:16:53 compute-0 systemd-logind[819]: Session 29 logged out. Waiting for processes to exit.
Nov 26 23:16:53 compute-0 systemd-logind[819]: Removed session 29.
Nov 26 23:16:54 compute-0 podman[238817]: 2025-11-26 23:16:54.84366525 +0000 UTC m=+0.122131002 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 23:16:54 compute-0 podman[238818]: 2025-11-26 23:16:54.846116845 +0000 UTC m=+0.118370124 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:16:54 compute-0 podman[238827]: 2025-11-26 23:16:54.859022544 +0000 UTC m=+0.114151532 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc.)
Nov 26 23:16:54 compute-0 podman[238816]: 2025-11-26 23:16:54.883672753 +0000 UTC m=+0.156138478 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 23:16:54 compute-0 podman[238815]: 2025-11-26 23:16:54.903815102 +0000 UTC m=+0.181944646 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., config_id=edpm, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.buildah.version=1.29.0, name=ubi9, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc.)
Nov 26 23:16:59 compute-0 podman[203621]: time="2025-11-26T23:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:16:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 26 23:16:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4285 "" "Go-http-client/1.1"
Nov 26 23:17:01 compute-0 openstack_network_exporter[205787]: ERROR   23:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:17:01 compute-0 openstack_network_exporter[205787]: ERROR   23:17:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:17:01 compute-0 openstack_network_exporter[205787]: ERROR   23:17:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:17:01 compute-0 openstack_network_exporter[205787]: ERROR   23:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:17:01 compute-0 openstack_network_exporter[205787]: ERROR   23:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:17:06 compute-0 podman[238905]: 2025-11-26 23:17:06.817056976 +0000 UTC m=+0.096274213 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:17:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:17:09.618 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:17:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:17:09.618 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:17:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:17:09.619 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:17:09 compute-0 podman[238924]: 2025-11-26 23:17:09.823956999 +0000 UTC m=+0.112135900 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 23:17:14 compute-0 podman[238947]: 2025-11-26 23:17:14.828572246 +0000 UTC m=+0.127424831 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:17:20 compute-0 nova_compute[189387]: 2025-11-26 23:17:20.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:17:20 compute-0 nova_compute[189387]: 2025-11-26 23:17:20.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:17:20 compute-0 nova_compute[189387]: 2025-11-26 23:17:20.162 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:17:20 compute-0 nova_compute[189387]: 2025-11-26 23:17:20.163 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:17:20 compute-0 nova_compute[189387]: 2025-11-26 23:17:20.163 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:17:20 compute-0 nova_compute[189387]: 2025-11-26 23:17:20.163 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:17:20 compute-0 nova_compute[189387]: 2025-11-26 23:17:20.723 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:17:20 compute-0 nova_compute[189387]: 2025-11-26 23:17:20.725 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5711MB free_disk=72.4377326965332GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 23:17:20 compute-0 nova_compute[189387]: 2025-11-26 23:17:20.726 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:17:20 compute-0 nova_compute[189387]: 2025-11-26 23:17:20.726 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:17:20 compute-0 nova_compute[189387]: 2025-11-26 23:17:20.817 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:17:20 compute-0 nova_compute[189387]: 2025-11-26 23:17:20.818 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 23:17:20 compute-0 nova_compute[189387]: 2025-11-26 23:17:20.857 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:17:20 compute-0 nova_compute[189387]: 2025-11-26 23:17:20.885 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 23:17:20 compute-0 nova_compute[189387]: 2025-11-26 23:17:20.888 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:17:20 compute-0 nova_compute[189387]: 2025-11-26 23:17:20.889 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:17:21 compute-0 nova_compute[189387]: 2025-11-26 23:17:21.889 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:17:21 compute-0 nova_compute[189387]: 2025-11-26 23:17:21.890 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:17:21 compute-0 nova_compute[189387]: 2025-11-26 23:17:21.891 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 23:17:21 compute-0 nova_compute[189387]: 2025-11-26 23:17:21.909 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 23:17:21 compute-0 nova_compute[189387]: 2025-11-26 23:17:21.910 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:17:21 compute-0 nova_compute[189387]: 2025-11-26 23:17:21.910 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 23:17:22 compute-0 nova_compute[189387]: 2025-11-26 23:17:22.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:17:22 compute-0 nova_compute[189387]: 2025-11-26 23:17:22.126 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:17:22 compute-0 podman[238967]: 2025-11-26 23:17:22.900414603 +0000 UTC m=+0.187425440 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller)
Nov 26 23:17:23 compute-0 nova_compute[189387]: 2025-11-26 23:17:23.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:17:24 compute-0 nova_compute[189387]: 2025-11-26 23:17:24.127 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:17:25 compute-0 nova_compute[189387]: 2025-11-26 23:17:25.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:17:25 compute-0 podman[238998]: 2025-11-26 23:17:25.829767807 +0000 UTC m=+0.102494637 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible)
Nov 26 23:17:25 compute-0 podman[239004]: 2025-11-26 23:17:25.837572351 +0000 UTC m=+0.090065799 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, release=1755695350, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=edpm, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-type=git)
Nov 26 23:17:25 compute-0 podman[238997]: 2025-11-26 23:17:25.845465009 +0000 UTC m=+0.125779069 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 26 23:17:25 compute-0 podman[238996]: 2025-11-26 23:17:25.854744784 +0000 UTC m=+0.132331702 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 23:17:25 compute-0 podman[238995]: 2025-11-26 23:17:25.869216594 +0000 UTC m=+0.153074056 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, maintainer=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler)
Nov 26 23:17:27 compute-0 nova_compute[189387]: 2025-11-26 23:17:27.120 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:17:29 compute-0 podman[203621]: time="2025-11-26T23:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:17:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 26 23:17:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4292 "" "Go-http-client/1.1"
Nov 26 23:17:31 compute-0 openstack_network_exporter[205787]: ERROR   23:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:17:31 compute-0 openstack_network_exporter[205787]: ERROR   23:17:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:17:31 compute-0 openstack_network_exporter[205787]: ERROR   23:17:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:17:31 compute-0 openstack_network_exporter[205787]: ERROR   23:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:17:31 compute-0 openstack_network_exporter[205787]: ERROR   23:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:17:38 compute-0 podman[239090]: 2025-11-26 23:17:38.097966527 +0000 UTC m=+0.380358576 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 26 23:17:40 compute-0 podman[239110]: 2025-11-26 23:17:40.821847817 +0000 UTC m=+0.106632624 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:17:45 compute-0 podman[239133]: 2025-11-26 23:17:45.827033893 +0000 UTC m=+0.112126529 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 23:17:53 compute-0 podman[239152]: 2025-11-26 23:17:53.876589364 +0000 UTC m=+0.172530252 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:17:56 compute-0 podman[239181]: 2025-11-26 23:17:56.834297154 +0000 UTC m=+0.103755649 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 23:17:56 compute-0 podman[239194]: 2025-11-26 23:17:56.840358634 +0000 UTC m=+0.101129639 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, version=9.6, architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, managed_by=edpm_ansible)
Nov 26 23:17:56 compute-0 podman[239186]: 2025-11-26 23:17:56.842322425 +0000 UTC m=+0.105284578 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 23:17:56 compute-0 podman[239180]: 2025-11-26 23:17:56.845863169 +0000 UTC m=+0.137646052 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:17:56 compute-0 podman[239179]: 2025-11-26 23:17:56.84741169 +0000 UTC m=+0.134176551 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, com.redhat.component=ubi9-container, name=ubi9, container_name=kepler, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release-0.7.12=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vendor=Red Hat, Inc.)
Nov 26 23:17:59 compute-0 podman[203621]: time="2025-11-26T23:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:17:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 26 23:17:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4289 "" "Go-http-client/1.1"
Nov 26 23:18:01 compute-0 openstack_network_exporter[205787]: ERROR   23:18:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:18:01 compute-0 openstack_network_exporter[205787]: ERROR   23:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:18:01 compute-0 openstack_network_exporter[205787]: ERROR   23:18:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:18:01 compute-0 openstack_network_exporter[205787]: ERROR   23:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:18:01 compute-0 openstack_network_exporter[205787]: ERROR   23:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:18:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:18:02.450 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:74:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:17:d1:48:8c:c3'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 23:18:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:18:02.451 106595 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 26 23:18:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:18:02.453 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbd59242-3683-4df7-8a2a-12b2eb702783, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 23:18:08 compute-0 podman[239275]: 2025-11-26 23:18:08.830032942 +0000 UTC m=+0.115352824 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 26 23:18:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:18:09.619 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:18:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:18:09.619 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:18:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:18:09.619 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:18:11 compute-0 podman[239294]: 2025-11-26 23:18:11.810725418 +0000 UTC m=+0.101838208 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:18:16 compute-0 podman[239318]: 2025-11-26 23:18:16.818426309 +0000 UTC m=+0.103763848 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 26 23:18:21 compute-0 nova_compute[189387]: 2025-11-26 23:18:21.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:18:21 compute-0 nova_compute[189387]: 2025-11-26 23:18:21.124 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:18:21 compute-0 nova_compute[189387]: 2025-11-26 23:18:21.124 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 23:18:21 compute-0 nova_compute[189387]: 2025-11-26 23:18:21.149 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 23:18:21 compute-0 nova_compute[189387]: 2025-11-26 23:18:21.150 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:18:21 compute-0 nova_compute[189387]: 2025-11-26 23:18:21.150 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:18:21 compute-0 nova_compute[189387]: 2025-11-26 23:18:21.190 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:18:21 compute-0 nova_compute[189387]: 2025-11-26 23:18:21.190 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:18:21 compute-0 nova_compute[189387]: 2025-11-26 23:18:21.191 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:18:21 compute-0 nova_compute[189387]: 2025-11-26 23:18:21.192 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:18:21 compute-0 nova_compute[189387]: 2025-11-26 23:18:21.673 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:18:21 compute-0 nova_compute[189387]: 2025-11-26 23:18:21.674 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5706MB free_disk=72.43782424926758GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 23:18:21 compute-0 nova_compute[189387]: 2025-11-26 23:18:21.675 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:18:21 compute-0 nova_compute[189387]: 2025-11-26 23:18:21.675 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:18:21 compute-0 nova_compute[189387]: 2025-11-26 23:18:21.729 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:18:21 compute-0 nova_compute[189387]: 2025-11-26 23:18:21.730 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 23:18:21 compute-0 nova_compute[189387]: 2025-11-26 23:18:21.756 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:18:21 compute-0 nova_compute[189387]: 2025-11-26 23:18:21.773 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 23:18:21 compute-0 nova_compute[189387]: 2025-11-26 23:18:21.775 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:18:21 compute-0 nova_compute[189387]: 2025-11-26 23:18:21.775 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:18:23 compute-0 nova_compute[189387]: 2025-11-26 23:18:23.749 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:18:23 compute-0 nova_compute[189387]: 2025-11-26 23:18:23.750 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:18:23 compute-0 nova_compute[189387]: 2025-11-26 23:18:23.751 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 23:18:24 compute-0 nova_compute[189387]: 2025-11-26 23:18:24.121 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:18:24 compute-0 nova_compute[189387]: 2025-11-26 23:18:24.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:18:24 compute-0 podman[239339]: 2025-11-26 23:18:24.878497127 +0000 UTC m=+0.165226790 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 26 23:18:25 compute-0 nova_compute[189387]: 2025-11-26 23:18:25.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:18:26 compute-0 nova_compute[189387]: 2025-11-26 23:18:26.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:18:27 compute-0 podman[239366]: 2025-11-26 23:18:27.836167294 +0000 UTC m=+0.101740534 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 23:18:27 compute-0 podman[239365]: 2025-11-26 23:18:27.855055653 +0000 UTC m=+0.129782555 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, version=9.4, config_id=edpm, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, release-0.7.12=, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, managed_by=edpm_ansible)
Nov 26 23:18:27 compute-0 podman[239368]: 2025-11-26 23:18:27.859935191 +0000 UTC m=+0.111715747 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm)
Nov 26 23:18:27 compute-0 podman[239367]: 2025-11-26 23:18:27.880118844 +0000 UTC m=+0.149200227 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:18:27 compute-0 podman[239374]: 2025-11-26 23:18:27.89853022 +0000 UTC m=+0.146150726 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, architecture=x86_64, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public)
Nov 26 23:18:29 compute-0 podman[203621]: time="2025-11-26T23:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:18:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 26 23:18:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4292 "" "Go-http-client/1.1"
Nov 26 23:18:31 compute-0 openstack_network_exporter[205787]: ERROR   23:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:18:31 compute-0 openstack_network_exporter[205787]: ERROR   23:18:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:18:31 compute-0 openstack_network_exporter[205787]: ERROR   23:18:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:18:31 compute-0 openstack_network_exporter[205787]: ERROR   23:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:18:31 compute-0 openstack_network_exporter[205787]: ERROR   23:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.840 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.840 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.840 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.841 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.842 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.842 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.843 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.844 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.844 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.844 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.846 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.846 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.846 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.847 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.847 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.847 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.848 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.846 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.848 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.849 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.850 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.850 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.852 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.852 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.852 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.852 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.852 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.853 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.853 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.853 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:18:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:18:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:18:39 compute-0 podman[239469]: 2025-11-26 23:18:39.838570841 +0000 UTC m=+0.124088455 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:18:42 compute-0 podman[239489]: 2025-11-26 23:18:42.826432646 +0000 UTC m=+0.113816784 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:18:47 compute-0 podman[239512]: 2025-11-26 23:18:47.824963376 +0000 UTC m=+0.115175601 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_id=edpm, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute)
Nov 26 23:18:48 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:18:48.485 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:74:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:17:d1:48:8c:c3'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 23:18:48 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:18:48.486 106595 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 26 23:18:50 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:18:50.489 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbd59242-3683-4df7-8a2a-12b2eb702783, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 23:18:55 compute-0 podman[239532]: 2025-11-26 23:18:55.906666185 +0000 UTC m=+0.189528321 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 26 23:18:58 compute-0 podman[239558]: 2025-11-26 23:18:58.834192118 +0000 UTC m=+0.108146515 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vendor=Red Hat, Inc., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible)
Nov 26 23:18:58 compute-0 podman[239559]: 2025-11-26 23:18:58.835817241 +0000 UTC m=+0.105997918 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 23:18:58 compute-0 podman[239562]: 2025-11-26 23:18:58.846897563 +0000 UTC m=+0.112731005 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, version=9.6, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=)
Nov 26 23:18:58 compute-0 podman[239561]: 2025-11-26 23:18:58.859939057 +0000 UTC m=+0.127019792 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi)
Nov 26 23:18:58 compute-0 podman[239560]: 2025-11-26 23:18:58.86270475 +0000 UTC m=+0.123528420 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:18:59 compute-0 podman[203621]: time="2025-11-26T23:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:18:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 26 23:18:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4284 "" "Go-http-client/1.1"
Nov 26 23:19:00 compute-0 nova_compute[189387]: 2025-11-26 23:19:00.005 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "3214d9e6-3c61-49f0-a353-01201a6aa6db" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:19:00 compute-0 nova_compute[189387]: 2025-11-26 23:19:00.005 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "3214d9e6-3c61-49f0-a353-01201a6aa6db" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:19:00 compute-0 nova_compute[189387]: 2025-11-26 23:19:00.036 189391 DEBUG nova.compute.manager [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 26 23:19:00 compute-0 nova_compute[189387]: 2025-11-26 23:19:00.175 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:19:00 compute-0 nova_compute[189387]: 2025-11-26 23:19:00.176 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:19:00 compute-0 nova_compute[189387]: 2025-11-26 23:19:00.186 189391 DEBUG nova.virt.hardware [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 26 23:19:00 compute-0 nova_compute[189387]: 2025-11-26 23:19:00.187 189391 INFO nova.compute.claims [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 26 23:19:00 compute-0 nova_compute[189387]: 2025-11-26 23:19:00.303 189391 DEBUG nova.compute.provider_tree [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:19:00 compute-0 nova_compute[189387]: 2025-11-26 23:19:00.317 189391 DEBUG nova.scheduler.client.report [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 23:19:00 compute-0 nova_compute[189387]: 2025-11-26 23:19:00.351 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:19:00 compute-0 nova_compute[189387]: 2025-11-26 23:19:00.352 189391 DEBUG nova.compute.manager [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 26 23:19:00 compute-0 nova_compute[189387]: 2025-11-26 23:19:00.395 189391 DEBUG nova.compute.manager [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 26 23:19:00 compute-0 nova_compute[189387]: 2025-11-26 23:19:00.396 189391 DEBUG nova.network.neutron [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 26 23:19:00 compute-0 nova_compute[189387]: 2025-11-26 23:19:00.421 189391 INFO nova.virt.libvirt.driver [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 26 23:19:00 compute-0 nova_compute[189387]: 2025-11-26 23:19:00.468 189391 DEBUG nova.compute.manager [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 26 23:19:00 compute-0 nova_compute[189387]: 2025-11-26 23:19:00.561 189391 DEBUG nova.compute.manager [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 26 23:19:00 compute-0 nova_compute[189387]: 2025-11-26 23:19:00.564 189391 DEBUG nova.virt.libvirt.driver [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 26 23:19:00 compute-0 nova_compute[189387]: 2025-11-26 23:19:00.565 189391 INFO nova.virt.libvirt.driver [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Creating image(s)#033[00m
Nov 26 23:19:00 compute-0 nova_compute[189387]: 2025-11-26 23:19:00.567 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "/var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:19:00 compute-0 nova_compute[189387]: 2025-11-26 23:19:00.568 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "/var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:19:00 compute-0 nova_compute[189387]: 2025-11-26 23:19:00.570 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "/var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:19:00 compute-0 nova_compute[189387]: 2025-11-26 23:19:00.571 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "88820ed9476b98465b4ed33781797613b42e7ead" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:19:00 compute-0 nova_compute[189387]: 2025-11-26 23:19:00.573 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "88820ed9476b98465b4ed33781797613b42e7ead" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:19:01 compute-0 openstack_network_exporter[205787]: ERROR   23:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:19:01 compute-0 openstack_network_exporter[205787]: ERROR   23:19:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:19:01 compute-0 openstack_network_exporter[205787]: ERROR   23:19:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:19:01 compute-0 openstack_network_exporter[205787]: ERROR   23:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:19:01 compute-0 openstack_network_exporter[205787]: ERROR   23:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:19:01 compute-0 nova_compute[189387]: 2025-11-26 23:19:01.657 189391 WARNING oslo_policy.policy [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Nov 26 23:19:01 compute-0 nova_compute[189387]: 2025-11-26 23:19:01.659 189391 WARNING oslo_policy.policy [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Nov 26 23:19:02 compute-0 nova_compute[189387]: 2025-11-26 23:19:02.474 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:19:02 compute-0 nova_compute[189387]: 2025-11-26 23:19:02.572 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead.part --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:19:02 compute-0 nova_compute[189387]: 2025-11-26 23:19:02.574 189391 DEBUG nova.virt.images [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] 422f324f-e13a-4c74-ba29-023e791ed636 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Nov 26 23:19:02 compute-0 nova_compute[189387]: 2025-11-26 23:19:02.575 189391 DEBUG nova.privsep.utils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 26 23:19:02 compute-0 nova_compute[189387]: 2025-11-26 23:19:02.576 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead.part /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:19:02 compute-0 nova_compute[189387]: 2025-11-26 23:19:02.820 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead.part /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead.converted" returned: 0 in 0.244s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:19:02 compute-0 nova_compute[189387]: 2025-11-26 23:19:02.827 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:19:02 compute-0 nova_compute[189387]: 2025-11-26 23:19:02.907 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead.converted --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:19:02 compute-0 nova_compute[189387]: 2025-11-26 23:19:02.909 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "88820ed9476b98465b4ed33781797613b42e7ead" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.336s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:19:02 compute-0 nova_compute[189387]: 2025-11-26 23:19:02.937 189391 INFO oslo.privsep.daemon [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpjx2dqnkv/privsep.sock']#033[00m
Nov 26 23:19:03 compute-0 nova_compute[189387]: 2025-11-26 23:19:03.090 189391 DEBUG nova.network.neutron [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Successfully created port: 3109b207-2fdd-46a4-8789-08fff2b3f916 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 26 23:19:03 compute-0 nova_compute[189387]: 2025-11-26 23:19:03.669 189391 INFO oslo.privsep.daemon [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 26 23:19:03 compute-0 nova_compute[189387]: 2025-11-26 23:19:03.544 239672 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 26 23:19:03 compute-0 nova_compute[189387]: 2025-11-26 23:19:03.551 239672 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 26 23:19:03 compute-0 nova_compute[189387]: 2025-11-26 23:19:03.555 239672 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Nov 26 23:19:03 compute-0 nova_compute[189387]: 2025-11-26 23:19:03.555 239672 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239672#033[00m
Nov 26 23:19:03 compute-0 nova_compute[189387]: 2025-11-26 23:19:03.764 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:19:03 compute-0 nova_compute[189387]: 2025-11-26 23:19:03.838 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:19:03 compute-0 nova_compute[189387]: 2025-11-26 23:19:03.839 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "88820ed9476b98465b4ed33781797613b42e7ead" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:19:03 compute-0 nova_compute[189387]: 2025-11-26 23:19:03.840 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "88820ed9476b98465b4ed33781797613b42e7ead" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:19:03 compute-0 nova_compute[189387]: 2025-11-26 23:19:03.855 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:19:03 compute-0 nova_compute[189387]: 2025-11-26 23:19:03.945 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:19:03 compute-0 nova_compute[189387]: 2025-11-26 23:19:03.946 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead,backing_fmt=raw /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:19:03 compute-0 nova_compute[189387]: 2025-11-26 23:19:03.998 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead,backing_fmt=raw /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk 1073741824" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:19:03 compute-0 nova_compute[189387]: 2025-11-26 23:19:03.999 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "88820ed9476b98465b4ed33781797613b42e7ead" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:03.999 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.067 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.068 189391 DEBUG nova.virt.disk.api [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Checking if we can resize image /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.068 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.163 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.164 189391 DEBUG nova.virt.disk.api [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Cannot resize image /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.165 189391 DEBUG nova.objects.instance [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lazy-loading 'migration_context' on Instance uuid 3214d9e6-3c61-49f0-a353-01201a6aa6db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.188 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "/var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.189 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "/var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.190 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "/var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.191 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.191 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.192 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.233 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.234 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.256 189391 DEBUG nova.network.neutron [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Successfully updated port: 3109b207-2fdd-46a4-8789-08fff2b3f916 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.273 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.273 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquired lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.273 189391 DEBUG nova.network.neutron [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.292 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.293 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.306 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.398 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.400 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.401 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.426 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.514 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.516 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.545 189391 DEBUG nova.network.neutron [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.586 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 1073741824" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.587 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.588 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.665 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.667 189391 DEBUG nova.virt.libvirt.driver [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.668 189391 DEBUG nova.virt.libvirt.driver [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Ensure instance console log exists: /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.669 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.670 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.671 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.789 189391 DEBUG nova.compute.manager [req-42af1c40-b0eb-4782-832a-ebba3cca8b51 req-cd7c3a46-5580-4b0d-bad1-debc5cdc1f82 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Received event network-changed-3109b207-2fdd-46a4-8789-08fff2b3f916 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.790 189391 DEBUG nova.compute.manager [req-42af1c40-b0eb-4782-832a-ebba3cca8b51 req-cd7c3a46-5580-4b0d-bad1-debc5cdc1f82 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Refreshing instance network info cache due to event network-changed-3109b207-2fdd-46a4-8789-08fff2b3f916. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 23:19:04 compute-0 nova_compute[189387]: 2025-11-26 23:19:04.791 189391 DEBUG oslo_concurrency.lockutils [req-42af1c40-b0eb-4782-832a-ebba3cca8b51 req-cd7c3a46-5580-4b0d-bad1-debc5cdc1f82 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.453 189391 DEBUG nova.network.neutron [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Updating instance_info_cache with network_info: [{"id": "3109b207-2fdd-46a4-8789-08fff2b3f916", "address": "fa:16:3e:bf:c7:ca", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3109b207-2f", "ovs_interfaceid": "3109b207-2fdd-46a4-8789-08fff2b3f916", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.485 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Releasing lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.486 189391 DEBUG nova.compute.manager [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Instance network_info: |[{"id": "3109b207-2fdd-46a4-8789-08fff2b3f916", "address": "fa:16:3e:bf:c7:ca", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3109b207-2f", "ovs_interfaceid": "3109b207-2fdd-46a4-8789-08fff2b3f916", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.487 189391 DEBUG oslo_concurrency.lockutils [req-42af1c40-b0eb-4782-832a-ebba3cca8b51 req-cd7c3a46-5580-4b0d-bad1-debc5cdc1f82 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquired lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.488 189391 DEBUG nova.network.neutron [req-42af1c40-b0eb-4782-832a-ebba3cca8b51 req-cd7c3a46-5580-4b0d-bad1-debc5cdc1f82 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Refreshing network info cache for port 3109b207-2fdd-46a4-8789-08fff2b3f916 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.494 189391 DEBUG nova.virt.libvirt.driver [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Start _get_guest_xml network_info=[{"id": "3109b207-2fdd-46a4-8789-08fff2b3f916", "address": "fa:16:3e:bf:c7:ca", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3109b207-2f", "ovs_interfaceid": "3109b207-2fdd-46a4-8789-08fff2b3f916", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-26T23:17:52Z,direct_url=<?>,disk_format='qcow2',id=422f324f-e13a-4c74-ba29-023e791ed636,min_disk=0,min_ram=0,name='cirros',owner='dd2e793599b6418881c391df7f71e0c6',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-26T23:17:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': '422f324f-e13a-4c74-ba29-023e791ed636'}], 'ephemerals': [{'size': 1, 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.508 189391 WARNING nova.virt.libvirt.driver [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.527 189391 DEBUG nova.virt.libvirt.host [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.530 189391 DEBUG nova.virt.libvirt.host [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.537 189391 DEBUG nova.virt.libvirt.host [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.538 189391 DEBUG nova.virt.libvirt.host [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
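
The four lines above are nova's cgroup probe: the v1 search fails (no per-controller cpu hierarchy) and the v2 search succeeds (the unified hierarchy advertises a cpu controller). A minimal sketch of an equivalent check, assuming a standard systemd host with cgroup v2 mounted at /sys/fs/cgroup; this is not nova's exact implementation:

    # Sketch of the cgroup probe above: the v1 search fails, the v2
    # search finds a "cpu" controller. Paths assume a systemd host with
    # the unified (v2) hierarchy at /sys/fs/cgroup.
    from pathlib import Path

    def has_cgroupsv1_cpu_controller() -> bool:
        # On a v1 host the cpu controller is mounted as its own hierarchy.
        return Path("/sys/fs/cgroup/cpu").is_dir()

    def has_cgroupsv2_cpu_controller() -> bool:
        # On a v2 host the enabled controllers are listed in one file.
        controllers = Path("/sys/fs/cgroup/cgroup.controllers")
        return controllers.is_file() and "cpu" in controllers.read_text().split()

    print(has_cgroupsv1_cpu_controller(), has_cgroupsv2_cpu_controller())
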
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.539 189391 DEBUG nova.virt.libvirt.driver [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.540 189391 DEBUG nova.virt.hardware [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T23:17:57Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='abcd883d-a9af-4dee-93ae-b5623bc853b6',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-26T23:17:52Z,direct_url=<?>,disk_format='qcow2',id=422f324f-e13a-4c74-ba29-023e791ed636,min_disk=0,min_ram=0,name='cirros',owner='dd2e793599b6418881c391df7f71e0c6',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-26T23:17:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.540 189391 DEBUG nova.virt.hardware [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.541 189391 DEBUG nova.virt.hardware [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.541 189391 DEBUG nova.virt.hardware [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.541 189391 DEBUG nova.virt.hardware [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.542 189391 DEBUG nova.virt.hardware [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.542 189391 DEBUG nova.virt.hardware [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.542 189391 DEBUG nova.virt.hardware [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.542 189391 DEBUG nova.virt.hardware [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.543 189391 DEBUG nova.virt.hardware [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.543 189391 DEBUG nova.virt.hardware [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
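
With no flavor or image topology constraints (the 0:0:0 limits and preferences above, which widen to the 65536 maxima), every sockets*cores*threads factorisation of the vCPU count is admissible, so a 1-vCPU guest can only get 1:1:1. A simplified sketch of that enumeration, not the real nova.virt.hardware implementation:

    # Simplified enumeration of guest CPU topologies: with the limits
    # above effectively unbounded (65536 each), any sockets*cores*threads
    # factorisation of the vCPU count qualifies.
    from itertools import product

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for s, c, t in product(range(1, min(vcpus, max_sockets) + 1),
                               range(1, min(vcpus, max_cores) + 1),
                               range(1, min(vcpus, max_threads) + 1)):
            if s * c * t == vcpus:
                yield s, c, t

    print(list(possible_topologies(1)))  # [(1, 1, 1)], as in the log
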
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.547 189391 DEBUG nova.privsep.utils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.548 189391 DEBUG nova.virt.libvirt.vif [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T23:18:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='422f324f-e13a-4c74-ba29-023e791ed636',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd2e793599b6418881c391df7f71e0c6',ramdisk_id='',reservation_id='r-1pai8j0u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='422f324f-e13a-4c74-ba29-023e791ed636',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T23:19:00Z,user_data=None,user_id='6ad061874c77438db2e6d8efb2b1400b',uuid=3214d9e6-3c61-49f0-a353-01201a6aa6db,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3109b207-2fdd-46a4-8789-08fff2b3f916", "address": "fa:16:3e:bf:c7:ca", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3109b207-2f", "ovs_interfaceid": "3109b207-2fdd-46a4-8789-08fff2b3f916", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.548 189391 DEBUG nova.network.os_vif_util [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Converting VIF {"id": "3109b207-2fdd-46a4-8789-08fff2b3f916", "address": "fa:16:3e:bf:c7:ca", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3109b207-2f", "ovs_interfaceid": "3109b207-2fdd-46a4-8789-08fff2b3f916", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.549 189391 DEBUG nova.network.os_vif_util [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:c7:ca,bridge_name='br-int',has_traffic_filtering=True,id=3109b207-2fdd-46a4-8789-08fff2b3f916,network=Network(16c31f2c-5dd2-49b9-b313-1ecd3b059554),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3109b207-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.551 189391 DEBUG nova.objects.instance [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3214d9e6-3c61-49f0-a353-01201a6aa6db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.572 189391 DEBUG nova.virt.libvirt.driver [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] End _get_guest_xml xml=<domain type="kvm">
Nov 26 23:19:05 compute-0 nova_compute[189387]:  <uuid>3214d9e6-3c61-49f0-a353-01201a6aa6db</uuid>
Nov 26 23:19:05 compute-0 nova_compute[189387]:  <name>instance-00000001</name>
Nov 26 23:19:05 compute-0 nova_compute[189387]:  <memory>524288</memory>
Nov 26 23:19:05 compute-0 nova_compute[189387]:  <vcpu>1</vcpu>
Nov 26 23:19:05 compute-0 nova_compute[189387]:  <metadata>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <nova:name>test_0</nova:name>
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <nova:creationTime>2025-11-26 23:19:05</nova:creationTime>
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <nova:flavor name="m1.small">
Nov 26 23:19:05 compute-0 nova_compute[189387]:        <nova:memory>512</nova:memory>
Nov 26 23:19:05 compute-0 nova_compute[189387]:        <nova:disk>1</nova:disk>
Nov 26 23:19:05 compute-0 nova_compute[189387]:        <nova:swap>0</nova:swap>
Nov 26 23:19:05 compute-0 nova_compute[189387]:        <nova:ephemeral>1</nova:ephemeral>
Nov 26 23:19:05 compute-0 nova_compute[189387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 23:19:05 compute-0 nova_compute[189387]:      </nova:flavor>
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <nova:owner>
Nov 26 23:19:05 compute-0 nova_compute[189387]:        <nova:user uuid="6ad061874c77438db2e6d8efb2b1400b">admin</nova:user>
Nov 26 23:19:05 compute-0 nova_compute[189387]:        <nova:project uuid="dd2e793599b6418881c391df7f71e0c6">admin</nova:project>
Nov 26 23:19:05 compute-0 nova_compute[189387]:      </nova:owner>
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <nova:root type="image" uuid="422f324f-e13a-4c74-ba29-023e791ed636"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <nova:ports>
Nov 26 23:19:05 compute-0 nova_compute[189387]:        <nova:port uuid="3109b207-2fdd-46a4-8789-08fff2b3f916">
Nov 26 23:19:05 compute-0 nova_compute[189387]:          <nova:ip type="fixed" address="192.168.0.4" ipVersion="4"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:        </nova:port>
Nov 26 23:19:05 compute-0 nova_compute[189387]:      </nova:ports>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    </nova:instance>
Nov 26 23:19:05 compute-0 nova_compute[189387]:  </metadata>
Nov 26 23:19:05 compute-0 nova_compute[189387]:  <sysinfo type="smbios">
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <system>
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <entry name="manufacturer">RDO</entry>
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <entry name="serial">3214d9e6-3c61-49f0-a353-01201a6aa6db</entry>
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <entry name="uuid">3214d9e6-3c61-49f0-a353-01201a6aa6db</entry>
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <entry name="family">Virtual Machine</entry>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    </system>
Nov 26 23:19:05 compute-0 nova_compute[189387]:  </sysinfo>
Nov 26 23:19:05 compute-0 nova_compute[189387]:  <os>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <boot dev="hd"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <smbios mode="sysinfo"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:  </os>
Nov 26 23:19:05 compute-0 nova_compute[189387]:  <features>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <acpi/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <apic/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <vmcoreinfo/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:  </features>
Nov 26 23:19:05 compute-0 nova_compute[189387]:  <clock offset="utc">
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <timer name="hpet" present="no"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:  </clock>
Nov 26 23:19:05 compute-0 nova_compute[189387]:  <cpu mode="host-model" match="exact">
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:  </cpu>
Nov 26 23:19:05 compute-0 nova_compute[189387]:  <devices>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <disk type="file" device="disk">
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <target dev="vda" bus="virtio"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <disk type="file" device="disk">
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <target dev="vdb" bus="virtio"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <disk type="file" device="cdrom">
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <driver name="qemu" type="raw" cache="none"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.config"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <target dev="sda" bus="sata"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <interface type="ethernet">
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <mac address="fa:16:3e:bf:c7:ca"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <mtu size="1442"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <target dev="tap3109b207-2f"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    </interface>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <serial type="pty">
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <log file="/var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/console.log" append="off"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    </serial>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <video>
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    </video>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <input type="tablet" bus="usb"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <rng model="virtio">
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <backend model="random">/dev/urandom</backend>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    </rng>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <controller type="usb" index="0"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    <memballoon model="virtio">
Nov 26 23:19:05 compute-0 nova_compute[189387]:      <stats period="10"/>
Nov 26 23:19:05 compute-0 nova_compute[189387]:    </memballoon>
Nov 26 23:19:05 compute-0 nova_compute[189387]:  </devices>
Nov 26 23:19:05 compute-0 nova_compute[189387]: </domain>
Nov 26 23:19:05 compute-0 nova_compute[189387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
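
Once the guest is defined, the same domain XML can be read back through libvirt; a small sketch using the libvirt Python bindings, with the URI and domain name taken from this log:

    # Read the generated domain XML back out of libvirt (sketch; URI and
    # domain name are taken from the log above).
    import libvirt

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByName("instance-00000001")
        print(dom.XMLDesc(0))  # should match the <domain type="kvm"> dump above
    finally:
        conn.close()
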
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.574 189391 DEBUG nova.compute.manager [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Preparing to wait for external event network-vif-plugged-3109b207-2fdd-46a4-8789-08fff2b3f916 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.575 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "3214d9e6-3c61-49f0-a353-01201a6aa6db-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.575 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "3214d9e6-3c61-49f0-a353-01201a6aa6db-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.576 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "3214d9e6-3c61-49f0-a353-01201a6aa6db-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
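
The acquire/release pairs on the "<uuid>-events" lock above come from oslo_concurrency's synchronized decorator. A minimal sketch of the same pattern; the function name mirrors the log, but the body is illustrative only:

    # Per-instance events lock, as produced by lockutils.synchronized.
    from oslo_concurrency import lockutils

    INSTANCE_UUID = "3214d9e6-3c61-49f0-a353-01201a6aa6db"

    @lockutils.synchronized(INSTANCE_UUID + "-events")
    def _create_or_get_event(name):
        # Runs with the events lock held, like the acquire/release pair
        # in the log; the body here is illustrative.
        return name

    _create_or_get_event("network-vif-plugged-3109b207-2fdd-46a4-8789-08fff2b3f916")
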
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.576 189391 DEBUG nova.virt.libvirt.vif [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T23:18:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='422f324f-e13a-4c74-ba29-023e791ed636',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd2e793599b6418881c391df7f71e0c6',ramdisk_id='',reservation_id='r-1pai8j0u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='422f324f-e13a-4c74-ba29-023e791ed636',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T23:19:00Z,user_data=None,user_id='6ad061874c77438db2e6d8efb2b1400b',uuid=3214d9e6-3c61-49f0-a353-01201a6aa6db,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3109b207-2fdd-46a4-8789-08fff2b3f916", "address": "fa:16:3e:bf:c7:ca", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3109b207-2f", "ovs_interfaceid": "3109b207-2fdd-46a4-8789-08fff2b3f916", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.577 189391 DEBUG nova.network.os_vif_util [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Converting VIF {"id": "3109b207-2fdd-46a4-8789-08fff2b3f916", "address": "fa:16:3e:bf:c7:ca", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3109b207-2f", "ovs_interfaceid": "3109b207-2fdd-46a4-8789-08fff2b3f916", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.577 189391 DEBUG nova.network.os_vif_util [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:c7:ca,bridge_name='br-int',has_traffic_filtering=True,id=3109b207-2fdd-46a4-8789-08fff2b3f916,network=Network(16c31f2c-5dd2-49b9-b313-1ecd3b059554),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3109b207-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.578 189391 DEBUG os_vif [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:c7:ca,bridge_name='br-int',has_traffic_filtering=True,id=3109b207-2fdd-46a4-8789-08fff2b3f916,network=Network(16c31f2c-5dd2-49b9-b313-1ecd3b059554),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3109b207-2f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
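
The plug call above takes an os-vif object built from the Neutron port shown earlier. A sketch of the same call through the public os-vif API, with field values copied from the log; nova populates more fields than shown here:

    # Sketch of the os-vif call behind "Plugging vif ..." above.
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()

    profile = vif.VIFPortProfileOpenVSwitch(
        interface_id="3109b207-2fdd-46a4-8789-08fff2b3f916")
    port = vif.VIFOpenVSwitch(
        id="3109b207-2fdd-46a4-8789-08fff2b3f916",
        address="fa:16:3e:bf:c7:ca",
        vif_name="tap3109b207-2f",
        bridge_name="br-int",
        plugin="ovs",
        port_profile=profile,
        network=network.Network(id="16c31f2c-5dd2-49b9-b313-1ecd3b059554",
                                bridge="br-int"))
    inst = instance_info.InstanceInfo(
        uuid="3214d9e6-3c61-49f0-a353-01201a6aa6db",
        name="instance-00000001")

    os_vif.plug(port, inst)  # logs "Successfully plugged vif ..." on success
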
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.640 189391 DEBUG ovsdbapp.backend.ovs_idl [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.641 189391 DEBUG ovsdbapp.backend.ovs_idl [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.642 189391 DEBUG ovsdbapp.backend.ovs_idl [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.642 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.643 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.644 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.645 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.647 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.650 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.661 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.662 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.662 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:19:05 compute-0 nova_compute[189387]: 2025-11-26 23:19:05.663 189391 INFO oslo.privsep.daemon [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmppiiml6hz/privsep.sock']#033[00m
Nov 26 23:19:06 compute-0 nova_compute[189387]: 2025-11-26 23:19:06.400 189391 INFO oslo.privsep.daemon [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 26 23:19:06 compute-0 nova_compute[189387]: 2025-11-26 23:19:06.253 239709 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 26 23:19:06 compute-0 nova_compute[189387]: 2025-11-26 23:19:06.260 239709 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 26 23:19:06 compute-0 nova_compute[189387]: 2025-11-26 23:19:06.264 239709 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Nov 26 23:19:06 compute-0 nova_compute[189387]: 2025-11-26 23:19:06.264 239709 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239709#033[00m
Nov 26 23:19:06 compute-0 nova_compute[189387]: 2025-11-26 23:19:06.780 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:19:06 compute-0 nova_compute[189387]: 2025-11-26 23:19:06.781 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3109b207-2f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:19:06 compute-0 nova_compute[189387]: 2025-11-26 23:19:06.781 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3109b207-2f, col_values=(('external_ids', {'iface-id': '3109b207-2fdd-46a4-8789-08fff2b3f916', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bf:c7:ca', 'vm-uuid': '3214d9e6-3c61-49f0-a353-01201a6aa6db'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
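
This transaction is ordinary ovsdbapp usage against the local switch (tcp:127.0.0.1:6640, per the CONNECTING line earlier): an AddPortCommand followed by a DbSetCommand on the new Interface row. A sketch of the equivalent calls; the connection setup is an assumption based on standard ovsdbapp usage, and the values are from the log:

    # Equivalent of the AddPortCommand + DbSetCommand transaction above.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server("tcp:127.0.0.1:6640", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    external_ids = {
        "iface-id": "3109b207-2fdd-46a4-8789-08fff2b3f916",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:bf:c7:ca",
        "vm-uuid": "3214d9e6-3c61-49f0-a353-01201a6aa6db",
    }
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tap3109b207-2f", may_exist=True))
        txn.add(api.db_set("Interface", "tap3109b207-2f",
                           ("external_ids", external_ids)))
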
Nov 26 23:19:06 compute-0 NetworkManager[56227]: <info>  [1764199146.7855] manager: (tap3109b207-2f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 26 23:19:06 compute-0 nova_compute[189387]: 2025-11-26 23:19:06.786 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:19:06 compute-0 nova_compute[189387]: 2025-11-26 23:19:06.788 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:19:06 compute-0 nova_compute[189387]: 2025-11-26 23:19:06.796 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:19:06 compute-0 nova_compute[189387]: 2025-11-26 23:19:06.797 189391 INFO os_vif [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:c7:ca,bridge_name='br-int',has_traffic_filtering=True,id=3109b207-2fdd-46a4-8789-08fff2b3f916,network=Network(16c31f2c-5dd2-49b9-b313-1ecd3b059554),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3109b207-2f')#033[00m
Nov 26 23:19:06 compute-0 nova_compute[189387]: 2025-11-26 23:19:06.989 189391 DEBUG nova.virt.libvirt.driver [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:19:06 compute-0 nova_compute[189387]: 2025-11-26 23:19:06.990 189391 DEBUG nova.virt.libvirt.driver [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:19:06 compute-0 nova_compute[189387]: 2025-11-26 23:19:06.990 189391 DEBUG nova.virt.libvirt.driver [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:19:06 compute-0 nova_compute[189387]: 2025-11-26 23:19:06.990 189391 DEBUG nova.virt.libvirt.driver [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] No VIF found with MAC fa:16:3e:bf:c7:ca, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 26 23:19:06 compute-0 nova_compute[189387]: 2025-11-26 23:19:06.991 189391 INFO nova.virt.libvirt.driver [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Using config drive#033[00m
Nov 26 23:19:07 compute-0 nova_compute[189387]: 2025-11-26 23:19:07.990 189391 INFO nova.virt.libvirt.driver [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Creating config drive at /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.config#033[00m
Nov 26 23:19:07 compute-0 nova_compute[189387]: 2025-11-26 23:19:07.994 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1domuz80 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:19:08 compute-0 nova_compute[189387]: 2025-11-26 23:19:08.131 189391 DEBUG oslo_concurrency.processutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1domuz80" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
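
The config drive is nothing more exotic than the logged mkisofs run over a staging directory of metadata files. A sketch of the same invocation through oslo_concurrency's processutils, with arguments copied from the log; the /tmp staging path is whatever tempdir nova populated:

    # The same config-drive build as the logged command.
    from oslo_concurrency import processutils

    processutils.execute(
        "/usr/bin/mkisofs", "-o",
        "/var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r", "-V", "config-2",
        "/tmp/tmp1domuz80")
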
Nov 26 23:19:08 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Nov 26 23:19:08 compute-0 kernel: tap3109b207-2f: entered promiscuous mode
Nov 26 23:19:08 compute-0 NetworkManager[56227]: <info>  [1764199148.2343] manager: (tap3109b207-2f): new Tun device (/org/freedesktop/NetworkManager/Devices/20)
Nov 26 23:19:08 compute-0 ovn_controller[97697]: 2025-11-26T23:19:08Z|00027|binding|INFO|Claiming lport 3109b207-2fdd-46a4-8789-08fff2b3f916 for this chassis.
Nov 26 23:19:08 compute-0 ovn_controller[97697]: 2025-11-26T23:19:08Z|00028|binding|INFO|3109b207-2fdd-46a4-8789-08fff2b3f916: Claiming fa:16:3e:bf:c7:ca 192.168.0.4
Nov 26 23:19:08 compute-0 nova_compute[189387]: 2025-11-26 23:19:08.239 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:19:08 compute-0 nova_compute[189387]: 2025-11-26 23:19:08.246 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:19:08 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:08.260 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:c7:ca 192.168.0.4'], port_security=['fa:16:3e:bf:c7:ca 192.168.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.4/24', 'neutron:device_id': '3214d9e6-3c61-49f0-a353-01201a6aa6db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dd2e793599b6418881c391df7f71e0c6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f63b4453-d311-40b9-8478-8f99967e0625', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef9a1501-6a1b-48e2-a80c-71a5e303b45d, chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=3109b207-2fdd-46a4-8789-08fff2b3f916) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:19:08 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:08.261 106595 INFO neutron.agent.ovn.metadata.agent [-] Port 3109b207-2fdd-46a4-8789-08fff2b3f916 in datapath 16c31f2c-5dd2-49b9-b313-1ecd3b059554 bound to our chassis#033[00m
Nov 26 23:19:08 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:08.263 106595 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 16c31f2c-5dd2-49b9-b313-1ecd3b059554#033[00m
Nov 26 23:19:08 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:08.265 106595 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp3yxsbcfm/privsep.sock']#033[00m
Nov 26 23:19:08 compute-0 systemd-udevd[239738]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 23:19:08 compute-0 NetworkManager[56227]: <info>  [1764199148.3118] device (tap3109b207-2f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 23:19:08 compute-0 NetworkManager[56227]: <info>  [1764199148.3127] device (tap3109b207-2f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 23:19:08 compute-0 systemd-machined[155674]: New machine qemu-1-instance-00000001.
Nov 26 23:19:08 compute-0 nova_compute[189387]: 2025-11-26 23:19:08.343 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:19:08 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Nov 26 23:19:08 compute-0 ovn_controller[97697]: 2025-11-26T23:19:08Z|00029|binding|INFO|Setting lport 3109b207-2fdd-46a4-8789-08fff2b3f916 ovn-installed in OVS
Nov 26 23:19:08 compute-0 ovn_controller[97697]: 2025-11-26T23:19:08Z|00030|binding|INFO|Setting lport 3109b207-2fdd-46a4-8789-08fff2b3f916 up in Southbound
Nov 26 23:19:08 compute-0 nova_compute[189387]: 2025-11-26 23:19:08.352 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:19:08 compute-0 nova_compute[189387]: 2025-11-26 23:19:08.562 189391 DEBUG nova.network.neutron [req-42af1c40-b0eb-4782-832a-ebba3cca8b51 req-cd7c3a46-5580-4b0d-bad1-debc5cdc1f82 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Updated VIF entry in instance network info cache for port 3109b207-2fdd-46a4-8789-08fff2b3f916. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 23:19:08 compute-0 nova_compute[189387]: 2025-11-26 23:19:08.563 189391 DEBUG nova.network.neutron [req-42af1c40-b0eb-4782-832a-ebba3cca8b51 req-cd7c3a46-5580-4b0d-bad1-debc5cdc1f82 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Updating instance_info_cache with network_info: [{"id": "3109b207-2fdd-46a4-8789-08fff2b3f916", "address": "fa:16:3e:bf:c7:ca", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3109b207-2f", "ovs_interfaceid": "3109b207-2fdd-46a4-8789-08fff2b3f916", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:19:08 compute-0 nova_compute[189387]: 2025-11-26 23:19:08.588 189391 DEBUG oslo_concurrency.lockutils [req-42af1c40-b0eb-4782-832a-ebba3cca8b51 req-cd7c3a46-5580-4b0d-bad1-debc5cdc1f82 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Releasing lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:19:08 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:08.925 106595 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 26 23:19:08 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:08.927 106595 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp3yxsbcfm/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 26 23:19:08 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:08.811 239757 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 26 23:19:08 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:08.819 239757 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 26 23:19:08 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:08.823 239757 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Nov 26 23:19:08 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:08.824 239757 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239757#033[00m
Nov 26 23:19:08 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:08.931 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[60123f8f-479d-4579-b343-1de964b30943]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.039 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764199149.0377746, 3214d9e6-3c61-49f0-a353-01201a6aa6db => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.041 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] VM Started (Lifecycle Event)
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.084 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.092 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764199149.0381567, 3214d9e6-3c61-49f0-a353-01201a6aa6db => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.093 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] VM Paused (Lifecycle Event)
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.101 189391 DEBUG nova.compute.manager [req-123b9759-f297-48cf-bd07-08156364e679 req-ec03aea3-e5ff-461a-adef-cbc0a6588310 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Received event network-vif-plugged-3109b207-2fdd-46a4-8789-08fff2b3f916 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.102 189391 DEBUG oslo_concurrency.lockutils [req-123b9759-f297-48cf-bd07-08156364e679 req-ec03aea3-e5ff-461a-adef-cbc0a6588310 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "3214d9e6-3c61-49f0-a353-01201a6aa6db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.103 189391 DEBUG oslo_concurrency.lockutils [req-123b9759-f297-48cf-bd07-08156364e679 req-ec03aea3-e5ff-461a-adef-cbc0a6588310 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "3214d9e6-3c61-49f0-a353-01201a6aa6db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.104 189391 DEBUG oslo_concurrency.lockutils [req-123b9759-f297-48cf-bd07-08156364e679 req-ec03aea3-e5ff-461a-adef-cbc0a6588310 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "3214d9e6-3c61-49f0-a353-01201a6aa6db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.105 189391 DEBUG nova.compute.manager [req-123b9759-f297-48cf-bd07-08156364e679 req-ec03aea3-e5ff-461a-adef-cbc0a6588310 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Processing event network-vif-plugged-3109b207-2fdd-46a4-8789-08fff2b3f916 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.107 189391 DEBUG nova.compute.manager [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
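
The "Received event network-vif-plugged-...", "-events" lock pop, and "Instance event wait completed in 0 seconds" lines are the two halves of Nova's external-event handshake: spawn blocks until Neutron confirms the VIF is plugged, and here the confirmation had already arrived. A simplified sketch of the waiting side (the plug/boot helpers are placeholders, and the 300 s deadline mirrors the usual vif_plugging_timeout default, an assumption here):

    # Port UUID taken from the log above.
    events = [("network-vif-plugged", "3109b207-2fdd-46a4-8789-08fff2b3f916")]

    with virtapi.wait_for_instance_event(instance, events, deadline=300):
        plug_vifs(instance)   # placeholder for the driver's VIF plug step
        boot_guest(instance)  # guest creation happens inside the wait
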
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.114 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.117 189391 DEBUG nova.virt.libvirt.driver [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.124 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764199149.1146502, 3214d9e6-3c61-49f0-a353-01201a6aa6db => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.125 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] VM Resumed (Lifecycle Event)
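
The Started -> Paused -> Resumed run is the normal libvirt cold-boot sequence (the domain is created paused, then resumed once its plumbing is ready), surfaced through Nova's lifecycle-event machinery. A sketch of the emitting side using the nova.virt.event constants; `driver` is a placeholder for the compute driver instance:

    from nova.virt import event as virtevent

    uuid = "3214d9e6-3c61-49f0-a353-01201a6aa6db"
    for transition in (virtevent.EVENT_LIFECYCLE_STARTED,
                       virtevent.EVENT_LIFECYCLE_PAUSED,
                       virtevent.EVENT_LIFECYCLE_RESUMED):
        # emit_event hands the event to the compute manager, which logs the
        # "VM <state> (Lifecycle Event)" lines seen here.
        driver.emit_event(virtevent.LifecycleEvent(uuid, transition))
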
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.139 189391 INFO nova.virt.libvirt.driver [-] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Instance spawned successfully.
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.140 189391 DEBUG nova.virt.libvirt.driver [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.147 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.154 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 26 23:19:09 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.173 189391 DEBUG nova.virt.libvirt.driver [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.175 189391 DEBUG nova.virt.libvirt.driver [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.177 189391 DEBUG nova.virt.libvirt.driver [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.179 189391 DEBUG nova.virt.libvirt.driver [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.180 189391 DEBUG nova.virt.libvirt.driver [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.182 189391 DEBUG nova.virt.libvirt.driver [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
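
The six "Found default for ..." lines record the driver pinning whichever bus and model defaults it actually used, so the instance keeps identical virtual hardware across reboots, migrations, and future default changes. The effect, sketched; persisting them under image_* keys in system_metadata is an assumption about the storage detail:

    # Values exactly as logged above.
    defaults = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }
    for prop, value in defaults.items():
        # Only properties the image left unset get a recorded default.
        instance.system_metadata.setdefault("image_" + prop, value)
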
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.187 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 26 23:19:09 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.255 189391 INFO nova.compute.manager [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Took 8.69 seconds to spawn the instance on the hypervisor.
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.256 189391 DEBUG nova.compute.manager [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.344 189391 INFO nova.compute.manager [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Took 9.20 seconds to build instance.
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.400 189391 DEBUG oslo_concurrency.lockutils [None req-fc689a81-9baa-4d59-9fed-b4464fdcf90d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "3214d9e6-3c61-49f0-a353-01201a6aa6db" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:19:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:09.418 239757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:19:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:09.419 239757 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:19:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:09.419 239757 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:19:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:09.620 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:19:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:09.620 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:19:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:09.620 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:19:09 compute-0 nova_compute[189387]: 2025-11-26 23:19:09.687 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:19:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:09.962 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[01da1c41-01bb-452c-bad6-38ab2ee0d996]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 23:19:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:09.964 106595 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap16c31f2c-51 in ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 26 23:19:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:09.966 239757 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap16c31f2c-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 26 23:19:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:09.966 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[affb3341-f3d8-490d-b636-2224229f9b19]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 23:19:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:09.970 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[41c0c868-d147-4dbb-846f-db3535b21430]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
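
"Creating VETH tap16c31f2c-51 in ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554 namespace" is the metadata agent provisioning its datapath: a veth pair whose host end will be plugged into br-int while the peer lives in the per-network namespace. Roughly, with pyroute2 (the library behind the neutron.privileged.agent.linux.ip_lib calls above); a sketch that assumes the namespace already exists:

    from pyroute2 import IPRoute

    NS = "ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554"

    with IPRoute() as ipr:
        # Create the pair with the names used in the log.
        ipr.link("add", ifname="tap16c31f2c-50", kind="veth",
                 peer="tap16c31f2c-51")
        peer = ipr.link_lookup(ifname="tap16c31f2c-51")[0]
        ipr.link("set", index=peer, net_ns_fd=NS)  # move peer into namespace
        host = ipr.link_lookup(ifname="tap16c31f2c-50")[0]
        ipr.link("set", index=host, state="up")
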
Nov 26 23:19:10 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:10.004 106708 DEBUG oslo.privsep.daemon [-] privsep: reply[8673210d-4e38-42e1-ba33-9d8732b0f223]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 23:19:10 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:10.039 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[05a5c554-c6f2-416c-a56f-0cbf6a776b5d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 23:19:10 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:10.042 106595 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpqvqwoaa7/privsep.sock']
Nov 26 23:19:10 compute-0 podman[239792]: 2025-11-26 23:19:10.147533785 +0000 UTC m=+0.114039290 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 23:19:10 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:10.702 106595 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 26 23:19:10 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:10.703 106595 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpqvqwoaa7/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 26 23:19:10 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:10.587 239818 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 26 23:19:10 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:10.592 239818 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 26 23:19:10 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:10.594 239818 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 26 23:19:10 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:10.595 239818 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239818
Nov 26 23:19:10 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:10.706 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[42dd2db4-94e0-4e94-bfe7-6888a9905f52]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 23:19:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:11.167 239818 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:19:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:11.167 239818 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:19:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:11.167 239818 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:19:11 compute-0 nova_compute[189387]: 2025-11-26 23:19:11.267 189391 DEBUG nova.compute.manager [req-63154b8b-8476-4346-8d04-e728ddc5d96a req-7f4c6dbb-df2c-48ba-8979-6e7709fa3f83 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Received event network-vif-plugged-3109b207-2fdd-46a4-8789-08fff2b3f916 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 23:19:11 compute-0 nova_compute[189387]: 2025-11-26 23:19:11.269 189391 DEBUG oslo_concurrency.lockutils [req-63154b8b-8476-4346-8d04-e728ddc5d96a req-7f4c6dbb-df2c-48ba-8979-6e7709fa3f83 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "3214d9e6-3c61-49f0-a353-01201a6aa6db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:19:11 compute-0 nova_compute[189387]: 2025-11-26 23:19:11.270 189391 DEBUG oslo_concurrency.lockutils [req-63154b8b-8476-4346-8d04-e728ddc5d96a req-7f4c6dbb-df2c-48ba-8979-6e7709fa3f83 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "3214d9e6-3c61-49f0-a353-01201a6aa6db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:19:11 compute-0 nova_compute[189387]: 2025-11-26 23:19:11.272 189391 DEBUG oslo_concurrency.lockutils [req-63154b8b-8476-4346-8d04-e728ddc5d96a req-7f4c6dbb-df2c-48ba-8979-6e7709fa3f83 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "3214d9e6-3c61-49f0-a353-01201a6aa6db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:19:11 compute-0 nova_compute[189387]: 2025-11-26 23:19:11.273 189391 DEBUG nova.compute.manager [req-63154b8b-8476-4346-8d04-e728ddc5d96a req-7f4c6dbb-df2c-48ba-8979-6e7709fa3f83 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] No waiting events found dispatching network-vif-plugged-3109b207-2fdd-46a4-8789-08fff2b3f916 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 26 23:19:11 compute-0 nova_compute[189387]: 2025-11-26 23:19:11.274 189391 WARNING nova.compute.manager [req-63154b8b-8476-4346-8d04-e728ddc5d96a req-7f4c6dbb-df2c-48ba-8979-6e7709fa3f83 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Received unexpected event network-vif-plugged-3109b207-2fdd-46a4-8789-08fff2b3f916 for instance with vm_state active and task_state None.
Nov 26 23:19:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:11.748 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[643f5f37-77df-432d-8d65-136acb05d417]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 23:19:11 compute-0 nova_compute[189387]: 2025-11-26 23:19:11.786 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:19:11 compute-0 NetworkManager[56227]: <info>  [1764199151.7903] manager: (tap16c31f2c-50): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Nov 26 23:19:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:11.789 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[c2598a86-4239-447e-ae3c-d132715ee67c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 23:19:11 compute-0 systemd-udevd[239830]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 23:19:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:11.826 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[c8bbed0c-5a66-4161-bcc7-201705148a74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 23:19:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:11.829 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[05c9f29e-fdb7-4f75-8e4b-ea8d99164d4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 23:19:11 compute-0 NetworkManager[56227]: <info>  [1764199151.8594] device (tap16c31f2c-50): carrier: link connected
Nov 26 23:19:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:11.863 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[ee310be7-5b7c-4570-88fc-5659d4aa9cd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 23:19:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:11.878 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[91b2e58d-22aa-4d07-85ba-f6360349947c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16c31f2c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:bc:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383451, 'reachable_time': 15778, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239848, 'error': None, 'target': 'ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 23:19:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:11.890 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[ae0e696b-42f4-4c8a-b17a-daa3b7ddefd3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef4:bced'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383451, 'tstamp': 383451}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239849, 'error': None, 'target': 'ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 23:19:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:11.902 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[ff0f6726-e9a8-4ff8-832c-37b6ce593831]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16c31f2c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:bc:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383451, 'reachable_time': 15778, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 239850, 'error': None, 'target': 'ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 23:19:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:11.930 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[fe6c150f-28d1-4b7d-80b8-2a1df8b3cd1a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 23:19:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:11.980 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[84f896e4-a4c5-4934-8b11-e5cd16f58b77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 23:19:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:11.982 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16c31f2c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 23:19:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:11.983 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 26 23:19:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:11.984 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16c31f2c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 23:19:11 compute-0 nova_compute[189387]: 2025-11-26 23:19:11.986 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:19:11 compute-0 NetworkManager[56227]: <info>  [1764199151.9874] manager: (tap16c31f2c-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Nov 26 23:19:11 compute-0 kernel: tap16c31f2c-50: entered promiscuous mode
Nov 26 23:19:11 compute-0 nova_compute[189387]: 2025-11-26 23:19:11.992 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:19:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:11.994 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap16c31f2c-50, col_values=(('external_ids', {'iface-id': 'fcca7a28-5262-4637-8ef9-d543dee768b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 23:19:11 compute-0 nova_compute[189387]: 2025-11-26 23:19:11.996 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
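
The DelPortCommand, AddPortCommand, and DbSetCommand transactions rehome the tap device from br-ex into br-int and stamp its external_ids:iface-id with the OVN port it represents; that binding write is what ovn-controller reacts to on the next line. The equivalent calls through ovsdbapp's Open vSwitch API, batched into one transaction here for brevity (the agent runs them as three separate transactions, as logged); `api` stands for a connected impl_idl instance:

    # Assumption: api = ovsdbapp.schema.open_vswitch.impl_idl.OvsdbIdl(conn)
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port("tap16c31f2c-50", bridge="br-ex", if_exists=True))
        txn.add(api.add_port("br-int", "tap16c31f2c-50", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap16c31f2c-50",
            ("external_ids", {"iface-id": "fcca7a28-5262-4637-8ef9-d543dee768b2"})))
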
Nov 26 23:19:11 compute-0 ovn_controller[97697]: 2025-11-26T23:19:11Z|00031|binding|INFO|Releasing lport fcca7a28-5262-4637-8ef9-d543dee768b2 from this chassis (sb_readonly=0)
Nov 26 23:19:12 compute-0 nova_compute[189387]: 2025-11-26 23:19:12.023 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:12.025 106595 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/16c31f2c-5dd2-49b9-b313-1ecd3b059554.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/16c31f2c-5dd2-49b9-b313-1ecd3b059554.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 26 23:19:12 compute-0 nova_compute[189387]: 2025-11-26 23:19:12.027 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:12.026 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[40242013-a71d-43ba-9e6c-2f2639a94a19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:12.029 106595 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]: global
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]:    log         /dev/log local0 debug
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]:    log-tag     haproxy-metadata-proxy-16c31f2c-5dd2-49b9-b313-1ecd3b059554
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]:    user        root
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]:    group       root
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]:    maxconn     1024
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]:    pidfile     /var/lib/neutron/external/pids/16c31f2c-5dd2-49b9-b313-1ecd3b059554.pid.haproxy
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]:    daemon
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]: defaults
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]:    log global
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]:    mode http
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]:    option httplog
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]:    option dontlognull
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]:    option http-server-close
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]:    option forwardfor
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]:    retries                 3
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]:    timeout http-request    30s
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]:    timeout connect         30s
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]:    timeout client          32s
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]:    timeout server          32s
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]:    timeout http-keep-alive 30s
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]: listen listener
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]:    bind 169.254.169.254:80
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]:    server metadata /var/lib/neutron/metadata_proxy
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]:    http-request add-header X-OVN-Network-ID 16c31f2c-5dd2-49b9-b313-1ecd3b059554
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 26 23:19:12 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:19:12.033 106595 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'env', 'PROCESS_TAG=haproxy-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/16c31f2c-5dd2-49b9-b313-1ecd3b059554.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
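
The rendered configuration above binds 169.254.169.254:80 inside the ovnmeta namespace and forwards requests to the metadata agent's unix socket, adding X-OVN-Network-ID so the backend can attribute each request to its network; the rootwrap command on the previous line then launches haproxy against that file inside the namespace. One way to sanity-check such a rendered file before launch (a sketch; haproxy's -c flag only parses the configuration and exits):

    import subprocess

    CFG = ("/var/lib/neutron/ovn-metadata-proxy/"
           "16c31f2c-5dd2-49b9-b313-1ecd3b059554.conf")

    # Returns non-zero and prints diagnostics if the config is invalid.
    res = subprocess.run(["haproxy", "-c", "-f", CFG],
                         capture_output=True, text=True)
    print(res.returncode, res.stdout or res.stderr)
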
Nov 26 23:19:12 compute-0 podman[239884]: 2025-11-26 23:19:12.55202615 +0000 UTC m=+0.083189765 container create a7979d3e4aabd151746f0fb9ffc013f9762c18d0c40bdde656a196d564f5a79a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Nov 26 23:19:12 compute-0 podman[239884]: 2025-11-26 23:19:12.49971739 +0000 UTC m=+0.030881025 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 26 23:19:12 compute-0 systemd[1]: Started libpod-conmon-a7979d3e4aabd151746f0fb9ffc013f9762c18d0c40bdde656a196d564f5a79a.scope.
Nov 26 23:19:12 compute-0 systemd[1]: Started libcrun container.
Nov 26 23:19:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61e1ba994a8bc183260745e88cbd864581c7ee91b172595fef910c4a4f694f61/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 26 23:19:12 compute-0 podman[239884]: 2025-11-26 23:19:12.707652295 +0000 UTC m=+0.238815900 container init a7979d3e4aabd151746f0fb9ffc013f9762c18d0c40bdde656a196d564f5a79a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 23:19:12 compute-0 podman[239884]: 2025-11-26 23:19:12.727041207 +0000 UTC m=+0.258204812 container start a7979d3e4aabd151746f0fb9ffc013f9762c18d0c40bdde656a196d564f5a79a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 26 23:19:12 compute-0 neutron-haproxy-ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554[239899]: [NOTICE]   (239904) : New worker (239906) forked
Nov 26 23:19:12 compute-0 neutron-haproxy-ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554[239899]: [NOTICE]   (239904) : Loading success.
Nov 26 23:19:13 compute-0 podman[239915]: 2025-11-26 23:19:13.832712207 +0000 UTC m=+0.128667826 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:19:14 compute-0 nova_compute[189387]: 2025-11-26 23:19:14.690 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:19:16 compute-0 nova_compute[189387]: 2025-11-26 23:19:16.791 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:19:18 compute-0 podman[239939]: 2025-11-26 23:19:18.845168234 +0000 UTC m=+0.139283026 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4)
Nov 26 23:19:18 compute-0 nova_compute[189387]: 2025-11-26 23:19:18.932 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:19:18 compute-0 nova_compute[189387]: 2025-11-26 23:19:18.953 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Triggering sync for uuid 3214d9e6-3c61-49f0-a353-01201a6aa6db _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 26 23:19:18 compute-0 nova_compute[189387]: 2025-11-26 23:19:18.954 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "3214d9e6-3c61-49f0-a353-01201a6aa6db" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:19:18 compute-0 nova_compute[189387]: 2025-11-26 23:19:18.955 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "3214d9e6-3c61-49f0-a353-01201a6aa6db" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:19:19 compute-0 nova_compute[189387]: 2025-11-26 23:19:19.001 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "3214d9e6-3c61-49f0-a353-01201a6aa6db" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:19:19 compute-0 nova_compute[189387]: 2025-11-26 23:19:19.693 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:19:21 compute-0 nova_compute[189387]: 2025-11-26 23:19:21.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:19:21 compute-0 nova_compute[189387]: 2025-11-26 23:19:21.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 23:19:21 compute-0 nova_compute[189387]: 2025-11-26 23:19:21.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 23:19:21 compute-0 nova_compute[189387]: 2025-11-26 23:19:21.645 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:19:21 compute-0 nova_compute[189387]: 2025-11-26 23:19:21.646 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:19:21 compute-0 nova_compute[189387]: 2025-11-26 23:19:21.646 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 26 23:19:21 compute-0 nova_compute[189387]: 2025-11-26 23:19:21.647 189391 DEBUG nova.objects.instance [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3214d9e6-3c61-49f0-a353-01201a6aa6db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:19:21 compute-0 nova_compute[189387]: 2025-11-26 23:19:21.796 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:19:23 compute-0 nova_compute[189387]: 2025-11-26 23:19:23.929 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Updating instance_info_cache with network_info: [{"id": "3109b207-2fdd-46a4-8789-08fff2b3f916", "address": "fa:16:3e:bf:c7:ca", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3109b207-2f", "ovs_interfaceid": "3109b207-2fdd-46a4-8789-08fff2b3f916", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:19:24 compute-0 nova_compute[189387]: 2025-11-26 23:19:24.286 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:19:24 compute-0 nova_compute[189387]: 2025-11-26 23:19:24.287 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 26 23:19:24 compute-0 nova_compute[189387]: 2025-11-26 23:19:24.288 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:19:24 compute-0 nova_compute[189387]: 2025-11-26 23:19:24.290 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:19:24 compute-0 nova_compute[189387]: 2025-11-26 23:19:24.291 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 23:19:24 compute-0 nova_compute[189387]: 2025-11-26 23:19:24.292 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:19:24 compute-0 nova_compute[189387]: 2025-11-26 23:19:24.327 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:19:24 compute-0 nova_compute[189387]: 2025-11-26 23:19:24.328 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:19:24 compute-0 nova_compute[189387]: 2025-11-26 23:19:24.329 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
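The acquire/release pairs above are emitted by oslo.concurrency's lockutils: the decorated callable runs with the named lock held, and wait/hold times are logged at DEBUG. A minimal sketch of the pattern (illustrative function, not nova's actual code):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Body executes with the "compute_resources" lock held; lockutils
        # logs acquisition, wait time and hold time as in the lines above.
        pass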
Nov 26 23:19:24 compute-0 nova_compute[189387]: 2025-11-26 23:19:24.330 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:19:24 compute-0 nova_compute[189387]: 2025-11-26 23:19:24.421 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:19:24 compute-0 nova_compute[189387]: 2025-11-26 23:19:24.501 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:19:24 compute-0 nova_compute[189387]: 2025-11-26 23:19:24.503 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:19:24 compute-0 nova_compute[189387]: 2025-11-26 23:19:24.600 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:19:24 compute-0 nova_compute[189387]: 2025-11-26 23:19:24.602 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:19:24 compute-0 nova_compute[189387]: 2025-11-26 23:19:24.661 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:19:24 compute-0 nova_compute[189387]: 2025-11-26 23:19:24.663 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:19:24 compute-0 nova_compute[189387]: 2025-11-26 23:19:24.696 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:19:24 compute-0 nova_compute[189387]: 2025-11-26 23:19:24.758 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
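Each disk probe above runs qemu-img under oslo.concurrency's prlimit wrapper, which applies RLIMIT_AS and RLIMIT_CPU to the child before exec'ing it. A sketch reproducing one probe by hand, assuming the instance disk path from the log still exists on this host:

    import json
    import subprocess

    disk = "/var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk"
    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824",  # cap address space at 1 GiB
        "--cpu=30",         # cap CPU time at 30 seconds
        "--", "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", disk, "--force-share", "--output=json",
    ]
    # --force-share lets qemu-img read an image that a running guest holds open.
    info = json.loads(subprocess.check_output(cmd))
    print(info["format"], info["virtual-size"])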
Nov 26 23:19:25 compute-0 nova_compute[189387]: 2025-11-26 23:19:25.317 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:19:25 compute-0 nova_compute[189387]: 2025-11-26 23:19:25.320 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5298MB free_disk=72.40651321411133GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 23:19:25 compute-0 nova_compute[189387]: 2025-11-26 23:19:25.321 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:19:25 compute-0 nova_compute[189387]: 2025-11-26 23:19:25.322 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:19:25 compute-0 NetworkManager[56227]: <info>  [1764199165.3707] manager: (patch-provnet-c9d942ea-ad4b-46cc-9d84-38b9cfb3db21-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Nov 26 23:19:25 compute-0 nova_compute[189387]: 2025-11-26 23:19:25.371 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:19:25 compute-0 NetworkManager[56227]: <info>  [1764199165.3760] device (patch-provnet-c9d942ea-ad4b-46cc-9d84-38b9cfb3db21-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 23:19:25 compute-0 ovn_controller[97697]: 2025-11-26T23:19:25Z|00032|binding|INFO|Releasing lport fcca7a28-5262-4637-8ef9-d543dee768b2 from this chassis (sb_readonly=0)
Nov 26 23:19:25 compute-0 NetworkManager[56227]: <info>  [1764199165.3911] manager: (patch-br-int-to-provnet-c9d942ea-ad4b-46cc-9d84-38b9cfb3db21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Nov 26 23:19:25 compute-0 NetworkManager[56227]: <info>  [1764199165.3916] device (patch-br-int-to-provnet-c9d942ea-ad4b-46cc-9d84-38b9cfb3db21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 26 23:19:25 compute-0 NetworkManager[56227]: <info>  [1764199165.3930] manager: (patch-provnet-c9d942ea-ad4b-46cc-9d84-38b9cfb3db21-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Nov 26 23:19:25 compute-0 NetworkManager[56227]: <info>  [1764199165.3938] manager: (patch-br-int-to-provnet-c9d942ea-ad4b-46cc-9d84-38b9cfb3db21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Nov 26 23:19:25 compute-0 NetworkManager[56227]: <info>  [1764199165.3946] device (patch-provnet-c9d942ea-ad4b-46cc-9d84-38b9cfb3db21-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 26 23:19:25 compute-0 NetworkManager[56227]: <info>  [1764199165.3949] device (patch-br-int-to-provnet-c9d942ea-ad4b-46cc-9d84-38b9cfb3db21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 26 23:19:25 compute-0 ovn_controller[97697]: 2025-11-26T23:19:25Z|00033|binding|INFO|Releasing lport fcca7a28-5262-4637-8ef9-d543dee768b2 from this chassis (sb_readonly=0)
Nov 26 23:19:25 compute-0 nova_compute[189387]: 2025-11-26 23:19:25.426 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:19:25 compute-0 nova_compute[189387]: 2025-11-26 23:19:25.435 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:19:25 compute-0 nova_compute[189387]: 2025-11-26 23:19:25.563 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 3214d9e6-3c61-49f0-a353-01201a6aa6db actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:19:25 compute-0 nova_compute[189387]: 2025-11-26 23:19:25.564 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:19:25 compute-0 nova_compute[189387]: 2025-11-26 23:19:25.565 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 23:19:25 compute-0 nova_compute[189387]: 2025-11-26 23:19:25.673 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing inventories for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 26 23:19:25 compute-0 nova_compute[189387]: 2025-11-26 23:19:25.775 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Updating ProviderTree inventory for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 26 23:19:25 compute-0 nova_compute[189387]: 2025-11-26 23:19:25.776 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Updating inventory in ProviderTree for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
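The inventory pushed to Placement above turns into schedulable capacity via the documented formula capacity = (total - reserved) * allocation_ratio. Working it through with the values from this log line (plain Python):

    # Values copied from the inventory logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, round(capacity, 2))
    # VCPU 32.0 / MEMORY_MB 7168.0 / DISK_GB 71.1

So this node can hold 32 VCPUs, 7168 MB of RAM, and roughly 71 GB of disk worth of allocations, consistent with the "Final resource view" line above (1 of 8 physical vCPUs used, 512 MB reserved for the host).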
Nov 26 23:19:25 compute-0 nova_compute[189387]: 2025-11-26 23:19:25.784 189391 DEBUG nova.compute.manager [req-c4927fdc-b3e5-4bef-8062-cbb1e80b8fc3 req-f3f7e41b-e97a-4320-b24c-3700abccd55e f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Received event network-changed-3109b207-2fdd-46a4-8789-08fff2b3f916 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 23:19:25 compute-0 nova_compute[189387]: 2025-11-26 23:19:25.785 189391 DEBUG nova.compute.manager [req-c4927fdc-b3e5-4bef-8062-cbb1e80b8fc3 req-f3f7e41b-e97a-4320-b24c-3700abccd55e f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Refreshing instance network info cache due to event network-changed-3109b207-2fdd-46a4-8789-08fff2b3f916. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 26 23:19:25 compute-0 nova_compute[189387]: 2025-11-26 23:19:25.786 189391 DEBUG oslo_concurrency.lockutils [req-c4927fdc-b3e5-4bef-8062-cbb1e80b8fc3 req-f3f7e41b-e97a-4320-b24c-3700abccd55e f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:19:25 compute-0 nova_compute[189387]: 2025-11-26 23:19:25.786 189391 DEBUG oslo_concurrency.lockutils [req-c4927fdc-b3e5-4bef-8062-cbb1e80b8fc3 req-f3f7e41b-e97a-4320-b24c-3700abccd55e f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquired lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:19:25 compute-0 nova_compute[189387]: 2025-11-26 23:19:25.787 189391 DEBUG nova.network.neutron [req-c4927fdc-b3e5-4bef-8062-cbb1e80b8fc3 req-f3f7e41b-e97a-4320-b24c-3700abccd55e f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Refreshing network info cache for port 3109b207-2fdd-46a4-8789-08fff2b3f916 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 26 23:19:25 compute-0 nova_compute[189387]: 2025-11-26 23:19:25.799 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing aggregate associations for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 26 23:19:25 compute-0 nova_compute[189387]: 2025-11-26 23:19:25.823 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing trait associations for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78, traits: COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE41,HW_CPU_X86_AMD_SVM,HW_CPU_X86_MMX,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,HW_CPU_X86_SSE,HW_CPU_X86_AESNI,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI2,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 26 23:19:25 compute-0 nova_compute[189387]: 2025-11-26 23:19:25.888 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Updating inventory in ProviderTree for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 26 23:19:25 compute-0 nova_compute[189387]: 2025-11-26 23:19:25.931 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Updated inventory for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Nov 26 23:19:25 compute-0 nova_compute[189387]: 2025-11-26 23:19:25.932 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Updating resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Nov 26 23:19:25 compute-0 nova_compute[189387]: 2025-11-26 23:19:25.932 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Updating inventory in ProviderTree for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 26 23:19:25 compute-0 nova_compute[189387]: 2025-11-26 23:19:25.971 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:19:25 compute-0 nova_compute[189387]: 2025-11-26 23:19:25.972 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:19:25 compute-0 nova_compute[189387]: 2025-11-26 23:19:25.973 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:19:25 compute-0 nova_compute[189387]: 2025-11-26 23:19:25.974 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 26 23:19:25 compute-0 nova_compute[189387]: 2025-11-26 23:19:25.997 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 26 23:19:26 compute-0 nova_compute[189387]: 2025-11-26 23:19:26.001 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:19:26 compute-0 nova_compute[189387]: 2025-11-26 23:19:26.002 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 26 23:19:26 compute-0 nova_compute[189387]: 2025-11-26 23:19:26.020 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:19:26 compute-0 nova_compute[189387]: 2025-11-26 23:19:26.798 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:19:26 compute-0 podman[239974]: 2025-11-26 23:19:26.80016244 +0000 UTC m=+0.099704221 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 26 23:19:26 compute-0 nova_compute[189387]: 2025-11-26 23:19:26.868 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:19:26 compute-0 nova_compute[189387]: 2025-11-26 23:19:26.869 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:19:26 compute-0 nova_compute[189387]: 2025-11-26 23:19:26.869 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:19:26 compute-0 nova_compute[189387]: 2025-11-26 23:19:26.869 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:19:28 compute-0 nova_compute[189387]: 2025-11-26 23:19:28.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
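These entries come from oslo.service's periodic task machinery: methods registered on a PeriodicTasks subclass are invoked by run_periodic_tasks on their configured spacing. A minimal sketch of how such a task is declared (illustrative class and task name, not nova's actual code):

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _poll_volume_usage(self, context):
            # Invoked by run_periodic_tasks roughly every 60 s; each run
            # is logged at DEBUG like the lines above.
            pass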
Nov 26 23:19:28 compute-0 nova_compute[189387]: 2025-11-26 23:19:28.703 189391 DEBUG nova.network.neutron [req-c4927fdc-b3e5-4bef-8062-cbb1e80b8fc3 req-f3f7e41b-e97a-4320-b24c-3700abccd55e f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Updated VIF entry in instance network info cache for port 3109b207-2fdd-46a4-8789-08fff2b3f916. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 26 23:19:28 compute-0 nova_compute[189387]: 2025-11-26 23:19:28.704 189391 DEBUG nova.network.neutron [req-c4927fdc-b3e5-4bef-8062-cbb1e80b8fc3 req-f3f7e41b-e97a-4320-b24c-3700abccd55e f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Updating instance_info_cache with network_info: [{"id": "3109b207-2fdd-46a4-8789-08fff2b3f916", "address": "fa:16:3e:bf:c7:ca", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3109b207-2f", "ovs_interfaceid": "3109b207-2fdd-46a4-8789-08fff2b3f916", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 23:19:28 compute-0 nova_compute[189387]: 2025-11-26 23:19:28.730 189391 DEBUG oslo_concurrency.lockutils [req-c4927fdc-b3e5-4bef-8062-cbb1e80b8fc3 req-f3f7e41b-e97a-4320-b24c-3700abccd55e f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Releasing lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 23:19:29 compute-0 nova_compute[189387]: 2025-11-26 23:19:29.699 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:19:29 compute-0 podman[203621]: time="2025-11-26T23:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:19:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:19:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4774 "" "Go-http-client/1.1"
Nov 26 23:19:29 compute-0 podman[240006]: 2025-11-26 23:19:29.85565703 +0000 UTC m=+0.101898669 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:19:29 compute-0 podman[240016]: 2025-11-26 23:19:29.856770889 +0000 UTC m=+0.095460719 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, distribution-scope=public, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, architecture=x86_64, name=ubi9-minimal, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git)
Nov 26 23:19:29 compute-0 podman[240001]: 2025-11-26 23:19:29.856937423 +0000 UTC m=+0.127701629 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, vcs-type=git, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release-0.7.12=, version=9.4, architecture=x86_64, container_name=kepler, release=1214.1726694543)
Nov 26 23:19:29 compute-0 podman[240002]: 2025-11-26 23:19:29.876425308 +0000 UTC m=+0.144006360 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 23:19:29 compute-0 podman[240003]: 2025-11-26 23:19:29.886824892 +0000 UTC m=+0.141037302 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Nov 26 23:19:31 compute-0 openstack_network_exporter[205787]: ERROR   23:19:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:19:31 compute-0 openstack_network_exporter[205787]: ERROR   23:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:19:31 compute-0 openstack_network_exporter[205787]: ERROR   23:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:19:31 compute-0 openstack_network_exporter[205787]: ERROR   23:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:19:31 compute-0 openstack_network_exporter[205787]: ERROR   23:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
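The appctl errors above mean the exporter could not find the daemons' control sockets. A quick check, assuming the conventional runtime directories (/var/run/openvswitch for ovsdb-server and ovs-vswitchd, /var/run/ovn for ovn-northd; both paths are assumptions for this host, and on a compute node ovn-northd typically does not run at all, so its socket being absent is expected):

    from pathlib import Path

    # Daemons create control sockets named <daemon>.<pid>.ctl in their run dir.
    for rundir in ("/var/run/openvswitch", "/var/run/ovn"):
        path = Path(rundir)
        ctls = sorted(p.name for p in path.glob("*.ctl")) if path.exists() else []
        print(rundir, "->", ctls or "no control sockets found")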
Nov 26 23:19:31 compute-0 nova_compute[189387]: 2025-11-26 23:19:31.802 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:19:32 compute-0 nova_compute[189387]: 2025-11-26 23:19:32.119 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:19:34 compute-0 nova_compute[189387]: 2025-11-26 23:19:34.702 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:19:36 compute-0 nova_compute[189387]: 2025-11-26 23:19:36.806 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:19:39 compute-0 nova_compute[189387]: 2025-11-26 23:19:39.705 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:19:40 compute-0 podman[240100]: 2025-11-26 23:19:40.840372036 +0000 UTC m=+0.121718412 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 26 23:19:41 compute-0 nova_compute[189387]: 2025-11-26 23:19:41.810 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:19:42 compute-0 ovn_controller[97697]: 2025-11-26T23:19:42Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bf:c7:ca 192.168.0.4
Nov 26 23:19:42 compute-0 ovn_controller[97697]: 2025-11-26T23:19:42Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bf:c7:ca 192.168.0.4
Nov 26 23:19:44 compute-0 nova_compute[189387]: 2025-11-26 23:19:44.708 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:19:44 compute-0 podman[240133]: 2025-11-26 23:19:44.811288359 +0000 UTC m=+0.119065702 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:19:46 compute-0 nova_compute[189387]: 2025-11-26 23:19:46.815 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:19:49 compute-0 nova_compute[189387]: 2025-11-26 23:19:49.711 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:19:49 compute-0 podman[240157]: 2025-11-26 23:19:49.841310502 +0000 UTC m=+0.128269847 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:19:51 compute-0 nova_compute[189387]: 2025-11-26 23:19:51.822 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:19:54 compute-0 nova_compute[189387]: 2025-11-26 23:19:54.714 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:19:55 compute-0 ovn_controller[97697]: 2025-11-26T23:19:55Z|00034|memory_trim|INFO|Detected inactivity (last active 30013 ms ago): trimming memory
Nov 26 23:19:56 compute-0 nova_compute[189387]: 2025-11-26 23:19:56.827 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:19:57 compute-0 podman[240178]: 2025-11-26 23:19:57.886658725 +0000 UTC m=+0.169433644 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 26 23:19:59 compute-0 nova_compute[189387]: 2025-11-26 23:19:59.717 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:19:59 compute-0 podman[203621]: time="2025-11-26T23:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:19:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:19:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4783 "" "Go-http-client/1.1"
Nov 26 23:20:00 compute-0 podman[240206]: 2025-11-26 23:20:00.823143147 +0000 UTC m=+0.098551566 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 23:20:00 compute-0 podman[240205]: 2025-11-26 23:20:00.839960033 +0000 UTC m=+0.125518976 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:20:00 compute-0 podman[240207]: 2025-11-26 23:20:00.847053676 +0000 UTC m=+0.130502094 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:20:00 compute-0 podman[240204]: 2025-11-26 23:20:00.852867637 +0000 UTC m=+0.136203132 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, vendor=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., config_id=edpm, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.openshift.expose-services=, architecture=x86_64, managed_by=edpm_ansible, name=ubi9, release=1214.1726694543, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 26 23:20:00 compute-0 podman[240208]: 2025-11-26 23:20:00.866785938 +0000 UTC m=+0.138810640 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, version=9.6, maintainer=Red Hat, Inc., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, build-date=2025-08-20T13:12:41, vcs-type=git)
Nov 26 23:20:01 compute-0 openstack_network_exporter[205787]: ERROR   23:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:20:01 compute-0 openstack_network_exporter[205787]: ERROR   23:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:20:01 compute-0 openstack_network_exporter[205787]: ERROR   23:20:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:20:01 compute-0 openstack_network_exporter[205787]: ERROR   23:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:20:01 compute-0 openstack_network_exporter[205787]: ERROR   23:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
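These ERROR lines are expected noise on a compute node: ovs-appctl-style clients locate a daemon's control socket as <rundir>/<daemon>.<pid>.ctl, and neither ovn-northd (a control-plane daemon) nor the ovsdb-server socket this exporter probes for exists here. A hedged sketch of that discovery step, with the rundir paths assumed to be the conventional defaults rather than read from this deployment:

    import glob
    import os

    def find_ctl(daemon, rundirs=("/var/run/openvswitch", "/var/run/ovn")):
        # Control sockets are named "<daemon>.<pid>.ctl"; the PID is recovered
        # from the filename, which is what "Failed to get PID" refers to above.
        for rundir in rundirs:
            for path in glob.glob(os.path.join(rundir, daemon + ".*.ctl")):
                return path, int(path.rsplit(".", 2)[-2])
        return None  # -> "no control socket files found"

    print(find_ctl("ovn-northd"))  # None here: northd runs on the controllers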
Nov 26 23:20:01 compute-0 nova_compute[189387]: 2025-11-26 23:20:01.831 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:20:04 compute-0 nova_compute[189387]: 2025-11-26 23:20:04.720 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:20:06 compute-0 nova_compute[189387]: 2025-11-26 23:20:06.836 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:20:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:20:09.621 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:20:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:20:09.623 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:20:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:20:09.624 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
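The acquire/wait/release trio above is the standard oslo.concurrency pattern: neutron wraps _check_child_processes in a named lock, and lockutils emits the three DEBUG lines around the call. A minimal sketch of the same pattern, assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    # With DEBUG logging enabled, each call logs "Acquiring lock",
    # "acquired ... waited Ns", and "released ... held Ns", as above.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # examine monitored child processes while holding the lock

    check_child_processes()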
Nov 26 23:20:09 compute-0 nova_compute[189387]: 2025-11-26 23:20:09.723 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:20:11 compute-0 podman[240303]: 2025-11-26 23:20:11.828534277 +0000 UTC m=+0.103329930 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 26 23:20:11 compute-0 nova_compute[189387]: 2025-11-26 23:20:11.839 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:20:14 compute-0 nova_compute[189387]: 2025-11-26 23:20:14.728 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:20:15 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:20:15.132 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:74:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:17:d1:48:8c:c3'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 23:20:15 compute-0 nova_compute[189387]: 2025-11-26 23:20:15.132 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:20:15 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:20:15.134 106595 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
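The matched SbGlobalUpdateEvent is an ovsdbapp row event: the metadata agent watches the SB_Global table, and when northd bumps nb_cfg (3 -> 4 here) it schedules the chassis update that follows. A hedged sketch of such an event class, patterned on the repr in the log rather than copied from neutron's source:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        """Fires on SB_Global updates, e.g. the nb_cfg bump logged above."""

        def __init__(self):
            # (events, table, conditions) mirror the repr in the log line.
            super().__init__((self.ROW_UPDATE,), "SB_Global", None)

        def run(self, event, row, old):
            # The real agent delays briefly, then acks nb_cfg into
            # Chassis_Private (see the DbSetCommand transaction below).
            print("SB_Global nb_cfg is now", row.nb_cfg)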
Nov 26 23:20:15 compute-0 podman[240327]: 2025-11-26 23:20:15.818969261 +0000 UTC m=+0.104612934 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:20:16 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:20:16.139 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbd59242-3683-4df7-8a2a-12b2eb702783, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
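That transaction is the acknowledgement of the nb_cfg bump seen a few lines earlier: the agent writes the new value into its Chassis_Private row's external_ids so northd can tell this chassis is in sync. Roughly the following, assuming api is an already-connected ovsdbapp backend for the OVN southbound DB (connection setup omitted):

    # Equivalent of the logged DbSetCommand; record UUID copied from the log.
    api.db_set(
        "Chassis_Private",
        "bbd59242-3683-4df7-8a2a-12b2eb702783",
        ("external_ids", {"neutron:ovn-metadata-sb-cfg": "4"}),
    ).execute(check_error=True)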
Nov 26 23:20:16 compute-0 nova_compute[189387]: 2025-11-26 23:20:16.843 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:20:19 compute-0 nova_compute[189387]: 2025-11-26 23:20:19.730 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:20:20 compute-0 nova_compute[189387]: 2025-11-26 23:20:20.550 189391 DEBUG oslo_concurrency.lockutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "0d344cef-8e34-4a0c-b747-b8f1f12bbe26" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:20:20 compute-0 nova_compute[189387]: 2025-11-26 23:20:20.551 189391 DEBUG oslo_concurrency.lockutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "0d344cef-8e34-4a0c-b747-b8f1f12bbe26" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:20:20 compute-0 nova_compute[189387]: 2025-11-26 23:20:20.574 189391 DEBUG nova.compute.manager [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 26 23:20:20 compute-0 nova_compute[189387]: 2025-11-26 23:20:20.691 189391 DEBUG oslo_concurrency.lockutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:20:20 compute-0 nova_compute[189387]: 2025-11-26 23:20:20.692 189391 DEBUG oslo_concurrency.lockutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:20:20 compute-0 nova_compute[189387]: 2025-11-26 23:20:20.705 189391 DEBUG nova.virt.hardware [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 26 23:20:20 compute-0 nova_compute[189387]: 2025-11-26 23:20:20.706 189391 INFO nova.compute.claims [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Claim successful on node compute-0.ctlplane.example.com
Nov 26 23:20:20 compute-0 podman[240353]: 2025-11-26 23:20:20.788993653 +0000 UTC m=+0.080372216 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Nov 26 23:20:20 compute-0 nova_compute[189387]: 2025-11-26 23:20:20.906 189391 DEBUG nova.compute.provider_tree [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:20:20 compute-0 nova_compute[189387]: 2025-11-26 23:20:20.923 189391 DEBUG nova.scheduler.client.report [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
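Those inventory numbers fix how much placement will schedule onto this node: capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the values in the log line above:

    # capacity = (total - reserved) * allocation_ratio, per resource class
    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # MEMORY_MB 7168.0 / VCPU 32.0 / DISK_GB 70.2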
Nov 26 23:20:20 compute-0 nova_compute[189387]: 2025-11-26 23:20:20.953 189391 DEBUG oslo_concurrency.lockutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.260s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:20:20 compute-0 nova_compute[189387]: 2025-11-26 23:20:20.954 189391 DEBUG nova.compute.manager [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.027 189391 DEBUG nova.compute.manager [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.028 189391 DEBUG nova.network.neutron [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.049 189391 INFO nova.virt.libvirt.driver [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.106 189391 DEBUG nova.compute.manager [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.207 189391 DEBUG nova.compute.manager [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.210 189391 DEBUG nova.virt.libvirt.driver [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.211 189391 INFO nova.virt.libvirt.driver [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Creating image(s)
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.212 189391 DEBUG oslo_concurrency.lockutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "/var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.213 189391 DEBUG oslo_concurrency.lockutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "/var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.214 189391 DEBUG oslo_concurrency.lockutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "/var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.241 189391 DEBUG oslo_concurrency.processutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.322 189391 DEBUG oslo_concurrency.processutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.323 189391 DEBUG oslo_concurrency.lockutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "88820ed9476b98465b4ed33781797613b42e7ead" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.325 189391 DEBUG oslo_concurrency.lockutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "88820ed9476b98465b4ed33781797613b42e7ead" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.349 189391 DEBUG oslo_concurrency.processutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.434 189391 DEBUG oslo_concurrency.processutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.436 189391 DEBUG oslo_concurrency.processutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead,backing_fmt=raw /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.503 189391 DEBUG oslo_concurrency.processutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead,backing_fmt=raw /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk 1073741824" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.505 189391 DEBUG oslo_concurrency.lockutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "88820ed9476b98465b4ed33781797613b42e7ead" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.506 189391 DEBUG oslo_concurrency.processutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.563 189391 DEBUG oslo_concurrency.processutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
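The block above is nova's copy-on-write image path: probe the cached base image (under oslo_concurrency.prlimit, which caps the helper at 1 GiB of address space and 30 s of CPU time), then create the instance disk as a qcow2 overlay whose backing file is the raw base. The same two commands, replayed from the log via subprocess:

    import subprocess

    BASE = "/var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead"
    DISK = "/var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk"

    # qemu-img info, wrapped in prlimit exactly as logged.
    subprocess.run(
        ["python3", "-m", "oslo_concurrency.prlimit", "--as=1073741824",
         "--cpu=30", "--", "env", "LC_ALL=C", "LANG=C",
         "qemu-img", "info", BASE, "--force-share", "--output=json"],
        check=True)

    # Instance disk as a 1 GiB qcow2 overlay over the raw base image.
    subprocess.run(
        ["env", "LC_ALL=C", "LANG=C", "qemu-img", "create", "-f", "qcow2",
         "-o", "backing_file=" + BASE + ",backing_fmt=raw", DISK, "1073741824"],
        check=True)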
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.565 189391 DEBUG nova.virt.disk.api [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Checking if we can resize image /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.566 189391 DEBUG oslo_concurrency.processutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.627 189391 DEBUG oslo_concurrency.processutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.629 189391 DEBUG nova.virt.disk.api [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Cannot resize image /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
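"Cannot resize image ... to a smaller size" is not an error here: nova only grows disks, so when the requested flavor size does not exceed the overlay's current virtual size it skips the resize. A sketch of that check, reading virtual-size from qemu-img's JSON output:

    import json
    import subprocess

    def can_resize_image(path, requested_bytes):
        # Mirrors the nova.virt.disk.api check above: shrinking is refused.
        out = subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            check=True, capture_output=True, text=True).stdout
        return requested_bytes > json.loads(out)["virtual-size"]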
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.630 189391 DEBUG nova.objects.instance [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lazy-loading 'migration_context' on Instance uuid 0d344cef-8e34-4a0c-b747-b8f1f12bbe26 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.651 189391 DEBUG oslo_concurrency.lockutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "/var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.652 189391 DEBUG oslo_concurrency.lockutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "/var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.654 189391 DEBUG oslo_concurrency.lockutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "/var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.682 189391 DEBUG oslo_concurrency.processutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.742 189391 DEBUG oslo_concurrency.processutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.744 189391 DEBUG oslo_concurrency.lockutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.745 189391 DEBUG oslo_concurrency.lockutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.771 189391 DEBUG oslo_concurrency.processutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.848 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.861 189391 DEBUG oslo_concurrency.processutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.863 189391 DEBUG oslo_concurrency.processutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.930 189391 DEBUG oslo_concurrency.processutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 1073741824" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.932 189391 DEBUG oslo_concurrency.lockutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.933 189391 DEBUG oslo_concurrency.processutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.994 189391 DEBUG oslo_concurrency.processutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.997 189391 DEBUG nova.virt.libvirt.driver [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.998 189391 DEBUG nova.virt.libvirt.driver [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Ensure instance console log exists: /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 26 23:20:21 compute-0 nova_compute[189387]: 2025-11-26 23:20:21.999 189391 DEBUG oslo_concurrency.lockutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:20:22 compute-0 nova_compute[189387]: 2025-11-26 23:20:22.000 189391 DEBUG oslo_concurrency.lockutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:20:22 compute-0 nova_compute[189387]: 2025-11-26 23:20:22.001 189391 DEBUG oslo_concurrency.lockutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:20:22 compute-0 nova_compute[189387]: 2025-11-26 23:20:22.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:20:22 compute-0 nova_compute[189387]: 2025-11-26 23:20:22.124 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:20:22 compute-0 nova_compute[189387]: 2025-11-26 23:20:22.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 23:20:22 compute-0 nova_compute[189387]: 2025-11-26 23:20:22.154 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Nov 26 23:20:22 compute-0 nova_compute[189387]: 2025-11-26 23:20:22.665 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:20:22 compute-0 nova_compute[189387]: 2025-11-26 23:20:22.666 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:20:22 compute-0 nova_compute[189387]: 2025-11-26 23:20:22.667 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 23:20:22 compute-0 nova_compute[189387]: 2025-11-26 23:20:22.667 189391 DEBUG nova.objects.instance [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3214d9e6-3c61-49f0-a353-01201a6aa6db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 23:20:24 compute-0 nova_compute[189387]: 2025-11-26 23:20:24.109 189391 DEBUG nova.network.neutron [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Successfully updated port: faf484ac-094d-4505-a5ff-b8f5b82ac0cf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 26 23:20:24 compute-0 nova_compute[189387]: 2025-11-26 23:20:24.129 189391 DEBUG oslo_concurrency.lockutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "refresh_cache-0d344cef-8e34-4a0c-b747-b8f1f12bbe26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:20:24 compute-0 nova_compute[189387]: 2025-11-26 23:20:24.130 189391 DEBUG oslo_concurrency.lockutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquired lock "refresh_cache-0d344cef-8e34-4a0c-b747-b8f1f12bbe26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:20:24 compute-0 nova_compute[189387]: 2025-11-26 23:20:24.130 189391 DEBUG nova.network.neutron [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 26 23:20:24 compute-0 nova_compute[189387]: 2025-11-26 23:20:24.209 189391 DEBUG nova.compute.manager [req-ab7ddc5c-30c6-48d0-b4a2-62fb74d42137 req-f28bbd08-cde3-4ca3-be19-86ed61f0c2fb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Received event network-changed-faf484ac-094d-4505-a5ff-b8f5b82ac0cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 23:20:24 compute-0 nova_compute[189387]: 2025-11-26 23:20:24.210 189391 DEBUG nova.compute.manager [req-ab7ddc5c-30c6-48d0-b4a2-62fb74d42137 req-f28bbd08-cde3-4ca3-be19-86ed61f0c2fb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Refreshing instance network info cache due to event network-changed-faf484ac-094d-4505-a5ff-b8f5b82ac0cf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 26 23:20:24 compute-0 nova_compute[189387]: 2025-11-26 23:20:24.211 189391 DEBUG oslo_concurrency.lockutils [req-ab7ddc5c-30c6-48d0-b4a2-62fb74d42137 req-f28bbd08-cde3-4ca3-be19-86ed61f0c2fb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "refresh_cache-0d344cef-8e34-4a0c-b747-b8f1f12bbe26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:20:24 compute-0 nova_compute[189387]: 2025-11-26 23:20:24.283 189391 DEBUG nova.network.neutron [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 26 23:20:24 compute-0 nova_compute[189387]: 2025-11-26 23:20:24.649 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Updating instance_info_cache with network_info: [{"id": "3109b207-2fdd-46a4-8789-08fff2b3f916", "address": "fa:16:3e:bf:c7:ca", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3109b207-2f", "ovs_interfaceid": "3109b207-2fdd-46a4-8789-08fff2b3f916", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
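That update carries nova's cached network_info for the other instance on this host: a list of VIFs, each with a port ID, MAC, and nested subnet/IP data including any floating IPs. A small sketch of walking that structure (network_info below stands for the JSON list in the log line, already parsed with json.loads):

    # Collect fixed and floating addresses from a nova network_info list.
    def addresses(network_info):
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    yield vif["id"], ip["address"], "fixed"
                    for fip in ip.get("floating_ips", []):
                        yield vif["id"], fip["address"], "floating"

    # For the entry above this yields ('3109b207-...', '192.168.0.4', 'fixed')
    # and ('3109b207-...', '192.168.122.212', 'floating').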
Nov 26 23:20:24 compute-0 nova_compute[189387]: 2025-11-26 23:20:24.663 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 23:20:24 compute-0 nova_compute[189387]: 2025-11-26 23:20:24.663 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 26 23:20:24 compute-0 nova_compute[189387]: 2025-11-26 23:20:24.665 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:20:24 compute-0 nova_compute[189387]: 2025-11-26 23:20:24.665 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:20:24 compute-0 nova_compute[189387]: 2025-11-26 23:20:24.689 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:20:24 compute-0 nova_compute[189387]: 2025-11-26 23:20:24.690 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:20:24 compute-0 nova_compute[189387]: 2025-11-26 23:20:24.690 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:20:24 compute-0 nova_compute[189387]: 2025-11-26 23:20:24.691 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:20:24 compute-0 nova_compute[189387]: 2025-11-26 23:20:24.731 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:20:24 compute-0 nova_compute[189387]: 2025-11-26 23:20:24.777 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:20:24 compute-0 nova_compute[189387]: 2025-11-26 23:20:24.868 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:20:24 compute-0 nova_compute[189387]: 2025-11-26 23:20:24.869 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:20:24 compute-0 nova_compute[189387]: 2025-11-26 23:20:24.960 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:20:24 compute-0 nova_compute[189387]: 2025-11-26 23:20:24.961 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.055 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.057 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.139 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
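The four commands above are the periodic disk audit: each backing file (disk, disk.eph0) is probed twice with qemu-img under a prlimit wrapper. Stripped of that wrapper, the same probe can be reproduced like this (a sketch; the path is the instance disk from the log):

    import json
    import os
    import subprocess

    def qemu_img_info(path):
        # --force-share lets us inspect an image a running guest has open;
        # --output=json gives a parseable result, as in the log above.
        out = subprocess.check_output(
            ['qemu-img', 'info', path, '--force-share', '--output=json'],
            env={**os.environ, 'LC_ALL': 'C', 'LANG': 'C'})
        return json.loads(out)

    info = qemu_img_info(
        '/var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk')
    print(info['format'], info['virtual-size'])   # e.g. qcow2 <bytes>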
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.624 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.626 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5237MB free_disk=72.38589859008789GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
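The pci_devices field in the resource view above is plain JSON, so summarising it is mechanical; the eleven functions logged here split into six virtio devices (vendor 1af4) and five Intel chipset functions (vendor 8086). A sketch, with pci_json standing in for the logged list (hypothetical variable):

    import json
    from collections import Counter

    devices = json.loads(pci_json)    # hypothetical: the list logged above
    print(Counter(d['vendor_id'] for d in devices))
    # Counter({'1af4': 6, '8086': 5})
    print(sorted(d['address'] for d in devices if d['vendor_id'] == '1af4'))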
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.627 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.627 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.696 189391 DEBUG nova.network.neutron [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Updating instance_info_cache with network_info: [{"id": "faf484ac-094d-4505-a5ff-b8f5b82ac0cf", "address": "fa:16:3e:22:64:1d", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf484ac-09", "ovs_interfaceid": "faf484ac-094d-4505-a5ff-b8f5b82ac0cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.720 189391 DEBUG oslo_concurrency.lockutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Releasing lock "refresh_cache-0d344cef-8e34-4a0c-b747-b8f1f12bbe26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.720 189391 DEBUG nova.compute.manager [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Instance network_info: |[{"id": "faf484ac-094d-4505-a5ff-b8f5b82ac0cf", "address": "fa:16:3e:22:64:1d", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf484ac-09", "ovs_interfaceid": "faf484ac-094d-4505-a5ff-b8f5b82ac0cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
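Every network_info blob in these records has the same nesting: a list of VIFs, each carrying a network, its subnets, and per-IP floating_ips. Extracting the addresses is a straight walk of that structure; a sketch, with network_info_json standing in for the JSON logged above (hypothetical variable):

    import json

    for vif in json.loads(network_info_json):
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floating = [f['address'] for f in ip['floating_ips']]
                print(vif['id'], ip['address'], floating)
    # faf484ac-094d-4505-a5ff-b8f5b82ac0cf 192.168.0.173 ['192.168.122.185']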
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.721 189391 DEBUG oslo_concurrency.lockutils [req-ab7ddc5c-30c6-48d0-b4a2-62fb74d42137 req-f28bbd08-cde3-4ca3-be19-86ed61f0c2fb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquired lock "refresh_cache-0d344cef-8e34-4a0c-b747-b8f1f12bbe26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.721 189391 DEBUG nova.network.neutron [req-ab7ddc5c-30c6-48d0-b4a2-62fb74d42137 req-f28bbd08-cde3-4ca3-be19-86ed61f0c2fb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Refreshing network info cache for port faf484ac-094d-4505-a5ff-b8f5b82ac0cf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.727 189391 DEBUG nova.virt.libvirt.driver [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Start _get_guest_xml network_info=[{"id": "faf484ac-094d-4505-a5ff-b8f5b82ac0cf", "address": "fa:16:3e:22:64:1d", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf484ac-09", "ovs_interfaceid": "faf484ac-094d-4505-a5ff-b8f5b82ac0cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-26T23:17:52Z,direct_url=<?>,disk_format='qcow2',id=422f324f-e13a-4c74-ba29-023e791ed636,min_disk=0,min_ram=0,name='cirros',owner='dd2e793599b6418881c391df7f71e0c6',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-26T23:17:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': '422f324f-e13a-4c74-ba29-023e791ed636'}], 'ephemerals': [{'size': 1, 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.737 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 3214d9e6-3c61-49f0-a353-01201a6aa6db actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.737 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 0d344cef-8e34-4a0c-b747-b8f1f12bbe26 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.737 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.737 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
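The final resource view is self-consistent with two 1-vCPU / 512 MB / 1 GB-root / 1 GB-ephemeral guests, assuming nova's default reserved_host_memory_mb of 512 (an assumption; the option is not shown in the log):

    instances  = 2
    used_ram   = instances * 512 + 512   # MB; 512 reserved -> 1536, as logged
    used_disk  = instances * (1 + 1)     # GB; root + eph0   -> 4,    as logged
    used_vcpus = instances * 1           #                   -> 2,    as logged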
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.748 189391 WARNING nova.virt.libvirt.driver [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.756 189391 DEBUG nova.virt.libvirt.host [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.757 189391 DEBUG nova.virt.libvirt.host [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.762 189391 DEBUG nova.virt.libvirt.host [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.763 189391 DEBUG nova.virt.libvirt.host [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
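The probe above fails the cgroups-v1 check and succeeds on v2, i.e. this is a unified-hierarchy host with the cpu controller enabled. One way to make the equivalent check by hand (not necessarily the exact probe nova performs):

    # On a cgroups-v2 host the enabled controllers are listed in one file.
    with open('/sys/fs/cgroup/cgroup.controllers') as f:
        print('cpu' in f.read().split())   # True on this host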
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.763 189391 DEBUG nova.virt.libvirt.driver [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.764 189391 DEBUG nova.virt.hardware [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T23:17:57Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='abcd883d-a9af-4dee-93ae-b5623bc853b6',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-26T23:17:52Z,direct_url=<?>,disk_format='qcow2',id=422f324f-e13a-4c74-ba29-023e791ed636,min_disk=0,min_ram=0,name='cirros',owner='dd2e793599b6418881c391df7f71e0c6',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-26T23:17:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.764 189391 DEBUG nova.virt.hardware [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.764 189391 DEBUG nova.virt.hardware [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.764 189391 DEBUG nova.virt.hardware [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.765 189391 DEBUG nova.virt.hardware [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.765 189391 DEBUG nova.virt.hardware [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.765 189391 DEBUG nova.virt.hardware [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.765 189391 DEBUG nova.virt.hardware [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.765 189391 DEBUG nova.virt.hardware [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.766 189391 DEBUG nova.virt.hardware [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.766 189391 DEBUG nova.virt.hardware [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
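The lines above walk the CPU-topology search: no flavor or image constraints (the 0:0:0 triples mean "unset"), limits of 65536 each, and exactly one candidate for a single vCPU. A minimal sketch of that enumeration, not nova's exact code path:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield every (sockets, cores, threads) factorization of vcpus."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)] -> sockets=1 cores=1 threads=1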
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.770 189391 DEBUG nova.virt.libvirt.vif [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T23:20:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp',id=2,image_ref='422f324f-e13a-4c74-ba29-023e791ed636',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='6ec897c5-079b-468e-ab49-e7a7350f9bc9'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd2e793599b6418881c391df7f71e0c6',ramdisk_id='',reservation_id='r-9dg0j52v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='422f324f-e13a-4c74-ba29-023e791ed636',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T23:20:21Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zNjkwNTA4NDc2MzE2OTQ1NTYwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTM2OTA1MDg0NzYzMTY5NDU1NjA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MzY5MDUwODQ3NjMxNjk0NTU2MD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTM2OTA1MDg0NzYzMTY5NDU1NjA9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0zNjkwNTA4NDc2MzE2OTQ1NTYwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0zNjkwNTA4NDc2MzE2OTQ1NTYwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Nov 26 23:20:25 compute-0 nova_compute[189387]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MzY5MDUwODQ3NjMxNjk0NTU2MD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTM2OTA1MDg0NzYzMTY5NDU1NjA9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0zNjkwNTA4NDc2MzE2OTQ1NTYwPT0tLQo=',user_id='6ad061874c77438db2e6d8efb2b1400b',uuid=0d344cef-8e34-4a0c-b747-b8f1f12bbe26,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "faf484ac-094d-4505-a5ff-b8f5b82ac0cf", "address": "fa:16:3e:22:64:1d", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf484ac-09", "ovs_interfaceid": "faf484ac-094d-4505-a5ff-b8f5b82ac0cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.770 189391 DEBUG nova.network.os_vif_util [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Converting VIF {"id": "faf484ac-094d-4505-a5ff-b8f5b82ac0cf", "address": "fa:16:3e:22:64:1d", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf484ac-09", "ovs_interfaceid": "faf484ac-094d-4505-a5ff-b8f5b82ac0cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.771 189391 DEBUG nova.network.os_vif_util [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:64:1d,bridge_name='br-int',has_traffic_filtering=True,id=faf484ac-094d-4505-a5ff-b8f5b82ac0cf,network=Network(16c31f2c-5dd2-49b9-b313-1ecd3b059554),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfaf484ac-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.772 189391 DEBUG nova.objects.instance [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0d344cef-8e34-4a0c-b747-b8f1f12bbe26 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.795 189391 DEBUG nova.virt.libvirt.driver [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] End _get_guest_xml xml=<domain type="kvm">
Nov 26 23:20:25 compute-0 nova_compute[189387]:  <uuid>0d344cef-8e34-4a0c-b747-b8f1f12bbe26</uuid>
Nov 26 23:20:25 compute-0 nova_compute[189387]:  <name>instance-00000002</name>
Nov 26 23:20:25 compute-0 nova_compute[189387]:  <memory>524288</memory>
Nov 26 23:20:25 compute-0 nova_compute[189387]:  <vcpu>1</vcpu>
Nov 26 23:20:25 compute-0 nova_compute[189387]:  <metadata>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <nova:name>vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp</nova:name>
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <nova:creationTime>2025-11-26 23:20:25</nova:creationTime>
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <nova:flavor name="m1.small">
Nov 26 23:20:25 compute-0 nova_compute[189387]:        <nova:memory>512</nova:memory>
Nov 26 23:20:25 compute-0 nova_compute[189387]:        <nova:disk>1</nova:disk>
Nov 26 23:20:25 compute-0 nova_compute[189387]:        <nova:swap>0</nova:swap>
Nov 26 23:20:25 compute-0 nova_compute[189387]:        <nova:ephemeral>1</nova:ephemeral>
Nov 26 23:20:25 compute-0 nova_compute[189387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 23:20:25 compute-0 nova_compute[189387]:      </nova:flavor>
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <nova:owner>
Nov 26 23:20:25 compute-0 nova_compute[189387]:        <nova:user uuid="6ad061874c77438db2e6d8efb2b1400b">admin</nova:user>
Nov 26 23:20:25 compute-0 nova_compute[189387]:        <nova:project uuid="dd2e793599b6418881c391df7f71e0c6">admin</nova:project>
Nov 26 23:20:25 compute-0 nova_compute[189387]:      </nova:owner>
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <nova:root type="image" uuid="422f324f-e13a-4c74-ba29-023e791ed636"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <nova:ports>
Nov 26 23:20:25 compute-0 nova_compute[189387]:        <nova:port uuid="faf484ac-094d-4505-a5ff-b8f5b82ac0cf">
Nov 26 23:20:25 compute-0 nova_compute[189387]:          <nova:ip type="fixed" address="192.168.0.173" ipVersion="4"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:        </nova:port>
Nov 26 23:20:25 compute-0 nova_compute[189387]:      </nova:ports>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    </nova:instance>
Nov 26 23:20:25 compute-0 nova_compute[189387]:  </metadata>
Nov 26 23:20:25 compute-0 nova_compute[189387]:  <sysinfo type="smbios">
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <system>
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <entry name="manufacturer">RDO</entry>
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <entry name="serial">0d344cef-8e34-4a0c-b747-b8f1f12bbe26</entry>
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <entry name="uuid">0d344cef-8e34-4a0c-b747-b8f1f12bbe26</entry>
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <entry name="family">Virtual Machine</entry>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    </system>
Nov 26 23:20:25 compute-0 nova_compute[189387]:  </sysinfo>
Nov 26 23:20:25 compute-0 nova_compute[189387]:  <os>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <boot dev="hd"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <smbios mode="sysinfo"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:  </os>
Nov 26 23:20:25 compute-0 nova_compute[189387]:  <features>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <acpi/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <apic/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <vmcoreinfo/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:  </features>
Nov 26 23:20:25 compute-0 nova_compute[189387]:  <clock offset="utc">
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <timer name="hpet" present="no"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:  </clock>
Nov 26 23:20:25 compute-0 nova_compute[189387]:  <cpu mode="host-model" match="exact">
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:  </cpu>
Nov 26 23:20:25 compute-0 nova_compute[189387]:  <devices>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <disk type="file" device="disk">
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <target dev="vda" bus="virtio"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <disk type="file" device="disk">
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <target dev="vdb" bus="virtio"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <disk type="file" device="cdrom">
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <driver name="qemu" type="raw" cache="none"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.config"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <target dev="sda" bus="sata"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <interface type="ethernet">
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <mac address="fa:16:3e:22:64:1d"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <mtu size="1442"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <target dev="tapfaf484ac-09"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    </interface>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <serial type="pty">
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <log file="/var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/console.log" append="off"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    </serial>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <video>
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    </video>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <input type="tablet" bus="usb"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <rng model="virtio">
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <backend model="random">/dev/urandom</backend>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    </rng>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <controller type="usb" index="0"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    <memballoon model="virtio">
Nov 26 23:20:25 compute-0 nova_compute[189387]:      <stats period="10"/>
Nov 26 23:20:25 compute-0 nova_compute[189387]:    </memballoon>
Nov 26 23:20:25 compute-0 nova_compute[189387]:  </devices>
Nov 26 23:20:25 compute-0 nova_compute[189387]: </domain>
Nov 26 23:20:25 compute-0 nova_compute[189387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
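The domain XML above is the complete guest definition handed to libvirt: q35 machine type (from the image's hw_machine_type), host-model CPU, three file-backed disks, and one OVS-backed ethernet interface. Pulling the interesting bits back out is straightforward; a sketch, with domain_xml standing in for the document above (hypothetical variable):

    import xml.etree.ElementTree as ET

    dom = ET.fromstring(domain_xml)
    for disk in dom.findall('./devices/disk'):
        print(disk.get('device'),
              disk.find('target').get('dev'),
              disk.find('source').get('file'))
    # disk  vda /var/lib/nova/instances/0d344cef-.../disk
    # disk  vdb /var/lib/nova/instances/0d344cef-.../disk.eph0
    # cdrom sda /var/lib/nova/instances/0d344cef-.../disk.config
    print(dom.find('./devices/interface/mac').get('address'))  # fa:16:3e:22:64:1d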
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.796 189391 DEBUG nova.compute.manager [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Preparing to wait for external event network-vif-plugged-faf484ac-094d-4505-a5ff-b8f5b82ac0cf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.796 189391 DEBUG oslo_concurrency.lockutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "0d344cef-8e34-4a0c-b747-b8f1f12bbe26-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.796 189391 DEBUG oslo_concurrency.lockutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "0d344cef-8e34-4a0c-b747-b8f1f12bbe26-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.796 189391 DEBUG oslo_concurrency.lockutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "0d344cef-8e34-4a0c-b747-b8f1f12bbe26-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.797 189391 DEBUG nova.virt.libvirt.vif [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T23:20:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp',id=2,image_ref='422f324f-e13a-4c74-ba29-023e791ed636',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='6ec897c5-079b-468e-ab49-e7a7350f9bc9'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd2e793599b6418881c391df7f71e0c6',ramdisk_id='',reservation_id='r-9dg0j52v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='422f324f-e13a-4c74-ba29-023e791ed636',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T23:20:21Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zNjkwNTA4NDc2MzE2OTQ1NTYwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTM2OTA1MDg0NzYzMTY5NDU1NjA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MzY5MDUwODQ3NjMxNjk0NTU2MD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
[…]tools has been installed from package but no symlinks
# are yet in /opt/aws/bin/
cfn-create-aws-symlinks

# Do not remove - the cloud boothook should always return success
exit 0

--===============3690508476316945560==
Content-Type: text/part-handler; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="part-handler.py"

# part-handler
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime
import errno
import os
import sys


def list_types():
    return ["text/x-cfninitdata"]


def handle_part(data, ctype, filename, payload):
    if ctype == "__begin__":
        try:
            os.makedirs('/var/lib/heat-cfntools', int("700", 8))
        except OSError:
            ex_type, e, tb = sys.exc_info()
            if e.errno != errno.EEXIST:
                raise
        return

    if ctype == "__end__":
        return

    timestamp = datetime.datetime.now()
    with open('/var/log/part-handler.log', 'a') as log:
        log.write('%s filename:%s, ctype:%s\n' % (timestamp, filename, ctype))

    if ctype == 'text/x-cfninitdata':
        with open('/var/lib/heat-cfntools/%s' % filename, 'w') as f:
            f.write(payload)

        # TODO(sdake) hopefully temporary until users move to heat-cfntools-1.3
        with open('/var/lib/cloud/data/%s' % filename, 'w') as f:
            f.write(payload)

--===============3690508476316945560==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-userdata"


--===============3690508476316945560==
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="loguserdata.py"

#!/usr/bin/env python3
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime
import errno
import logging
import os
import subprocess
import sys


VAR_PATH = '/var/lib/heat-cfntools'
LOG = logging.getLogger('heat-provision')


def init_logging():
    LOG.setLevel(logging.INFO)
    LOG.addHandler(logging.StreamHandler())
    fh = logging.FileHandler("/var/log/heat-provision.log")
    os.chmod(fh.baseFilename, int("600", 8))
    LOG.addHandler(fh)


def call(args):

    class LogStream(object):

        def write(self, data):
            LOG.info(data)

    LOG.info('%s\n', ' '.join(ar[…]
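(Editor's note: the payload above is a cloud-init multipart MIME archive, boundary "===============3690508476316945560==", captured base64-encoded in the instance console log; the stream is cut off midway through loguserdata.py. As an illustrative sketch only, not part of the captured log: given the complete blob including its top-level MIME headers, the standard-library email module can split it into the same (filename, content-type, payload) triples that the part-handler.py shown above consumes. The helper name split_user_data and the stdin-based usage are assumptions for this example.)

    import base64
    import email
    import sys


    def split_user_data(raw):
        # Walk every leaf part of the multipart message; skip the
        # multipart containers themselves.
        msg = email.message_from_bytes(raw)
        for part in msg.walk():
            if part.is_multipart():
                continue
            yield (part.get_filename(), part.get_content_type(),
                   part.get_payload(decode=True))


    if __name__ == '__main__':
        # The archive arrives base64-encoded in the log; decode it first
        # (b64decode discards the embedded newlines by default).
        blob = base64.b64decode(sys.stdin.read())
        for name, ctype, payload in split_user_data(blob):
            print(name, ctype, len(payload or b''))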
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.797 189391 DEBUG nova.network.os_vif_util [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Converting VIF {"id": "faf484ac-094d-4505-a5ff-b8f5b82ac0cf", "address": "fa:16:3e:22:64:1d", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf484ac-09", "ovs_interfaceid": "faf484ac-094d-4505-a5ff-b8f5b82ac0cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.798 189391 DEBUG nova.network.os_vif_util [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:64:1d,bridge_name='br-int',has_traffic_filtering=True,id=faf484ac-094d-4505-a5ff-b8f5b82ac0cf,network=Network(16c31f2c-5dd2-49b9-b313-1ecd3b059554),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfaf484ac-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.798 189391 DEBUG os_vif [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:64:1d,bridge_name='br-int',has_traffic_filtering=True,id=faf484ac-094d-4505-a5ff-b8f5b82ac0cf,network=Network(16c31f2c-5dd2-49b9-b313-1ecd3b059554),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfaf484ac-09') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.801 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.802 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.802 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.805 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.805 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfaf484ac-09, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.806 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfaf484ac-09, col_values=(('external_ids', {'iface-id': 'faf484ac-094d-4505-a5ff-b8f5b82ac0cf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:22:64:1d', 'vm-uuid': '0d344cef-8e34-4a0c-b747-b8f1f12bbe26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.808 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:20:25 compute-0 NetworkManager[56227]: <info>  [1764199225.8092] manager: (tapfaf484ac-09): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.809 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.816 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.817 189391 INFO os_vif [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:64:1d,bridge_name='br-int',has_traffic_filtering=True,id=faf484ac-094d-4505-a5ff-b8f5b82ac0cf,network=Network(16c31f2c-5dd2-49b9-b313-1ecd3b059554),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfaf484ac-09')#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.833 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.869 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.923 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 23:20:25 compute-0 nova_compute[189387]: 2025-11-26 23:20:25.923 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.296s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:20:25 compute-0 rsyslogd[236865]: message too long (8192) with configured size 8096, begin of message is: 2025-11-26 23:20:25.770 189391 DEBUG nova.virt.libvirt.vif [None req-65199cfd-01 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 26 23:20:26 compute-0 nova_compute[189387]: 2025-11-26 23:20:26.058 189391 DEBUG nova.virt.libvirt.driver [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:20:26 compute-0 nova_compute[189387]: 2025-11-26 23:20:26.058 189391 DEBUG nova.virt.libvirt.driver [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:20:26 compute-0 nova_compute[189387]: 2025-11-26 23:20:26.059 189391 DEBUG nova.virt.libvirt.driver [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:20:26 compute-0 nova_compute[189387]: 2025-11-26 23:20:26.059 189391 DEBUG nova.virt.libvirt.driver [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] No VIF found with MAC fa:16:3e:22:64:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 26 23:20:26 compute-0 nova_compute[189387]: 2025-11-26 23:20:26.059 189391 INFO nova.virt.libvirt.driver [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Using config drive#033[00m
Nov 26 23:20:26 compute-0 nova_compute[189387]: 2025-11-26 23:20:26.815 189391 INFO nova.virt.libvirt.driver [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Creating config drive at /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.config#033[00m
Nov 26 23:20:26 compute-0 nova_compute[189387]: 2025-11-26 23:20:26.829 189391 DEBUG oslo_concurrency.processutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjja2nrko execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:20:26 compute-0 nova_compute[189387]: 2025-11-26 23:20:26.975 189391 DEBUG oslo_concurrency.processutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjja2nrko" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:20:27 compute-0 kernel: tapfaf484ac-09: entered promiscuous mode
Nov 26 23:20:27 compute-0 NetworkManager[56227]: <info>  [1764199227.0883] manager: (tapfaf484ac-09): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Nov 26 23:20:27 compute-0 nova_compute[189387]: 2025-11-26 23:20:27.094 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:20:27 compute-0 ovn_controller[97697]: 2025-11-26T23:20:27Z|00035|binding|INFO|Claiming lport faf484ac-094d-4505-a5ff-b8f5b82ac0cf for this chassis.
Nov 26 23:20:27 compute-0 ovn_controller[97697]: 2025-11-26T23:20:27Z|00036|binding|INFO|faf484ac-094d-4505-a5ff-b8f5b82ac0cf: Claiming fa:16:3e:22:64:1d 192.168.0.173
Nov 26 23:20:27 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:20:27.108 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:64:1d 192.168.0.173'], port_security=['fa:16:3e:22:64:1d 192.168.0.173'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-nvijrfhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-port-a64xkohxh7fv', 'neutron:cidrs': '192.168.0.173/24', 'neutron:device_id': '0d344cef-8e34-4a0c-b747-b8f1f12bbe26', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-nvijrfhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-port-a64xkohxh7fv', 'neutron:project_id': 'dd2e793599b6418881c391df7f71e0c6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f63b4453-d311-40b9-8478-8f99967e0625', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef9a1501-6a1b-48e2-a80c-71a5e303b45d, chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=faf484ac-094d-4505-a5ff-b8f5b82ac0cf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:20:27 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:20:27.111 106595 INFO neutron.agent.ovn.metadata.agent [-] Port faf484ac-094d-4505-a5ff-b8f5b82ac0cf in datapath 16c31f2c-5dd2-49b9-b313-1ecd3b059554 bound to our chassis#033[00m
Nov 26 23:20:27 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:20:27.113 106595 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 16c31f2c-5dd2-49b9-b313-1ecd3b059554#033[00m
Nov 26 23:20:27 compute-0 nova_compute[189387]: 2025-11-26 23:20:27.128 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:20:27 compute-0 ovn_controller[97697]: 2025-11-26T23:20:27Z|00037|binding|INFO|Setting lport faf484ac-094d-4505-a5ff-b8f5b82ac0cf ovn-installed in OVS
Nov 26 23:20:27 compute-0 ovn_controller[97697]: 2025-11-26T23:20:27Z|00038|binding|INFO|Setting lport faf484ac-094d-4505-a5ff-b8f5b82ac0cf up in Southbound
Nov 26 23:20:27 compute-0 nova_compute[189387]: 2025-11-26 23:20:27.137 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:20:27 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:20:27.145 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[6681e6e3-9da2-459a-9eda-8e363f7c7d2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:20:27 compute-0 systemd-machined[155674]: New machine qemu-2-instance-00000002.
Nov 26 23:20:27 compute-0 systemd-udevd[240434]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 23:20:27 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Nov 26 23:20:27 compute-0 NetworkManager[56227]: <info>  [1764199227.1874] device (tapfaf484ac-09): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 23:20:27 compute-0 NetworkManager[56227]: <info>  [1764199227.1883] device (tapfaf484ac-09): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 23:20:27 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:20:27.190 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[e20c6a52-8377-4371-8f93-0c6114353676]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:20:27 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:20:27.194 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[bfd44172-2ce4-407b-8a7f-0e19dbbd0bd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:20:27 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:20:27.233 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[9d9c5742-6f52-4e67-a0a2-7eb538afdfc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:20:27 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:20:27.254 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[d5018688-af5a-4480-8af4-7c9367f5f857]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16c31f2c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:bc:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383451, 'reachable_time': 38268, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240442, 'error': None, 'target': 'ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:20:27 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:20:27.279 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[076e16be-aea2-48d4-838a-fe017f1cac9e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap16c31f2c-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383460, 'tstamp': 383460}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240446, 'error': None, 'target': 'ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap16c31f2c-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383463, 'tstamp': 383463}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240446, 'error': None, 'target': 'ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:20:27 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:20:27.282 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16c31f2c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:20:27 compute-0 nova_compute[189387]: 2025-11-26 23:20:27.284 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:20:27 compute-0 nova_compute[189387]: 2025-11-26 23:20:27.285 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:20:27 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:20:27.286 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16c31f2c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:20:27 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:20:27.287 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:20:27 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:20:27.288 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap16c31f2c-50, col_values=(('external_ids', {'iface-id': 'fcca7a28-5262-4637-8ef9-d543dee768b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:20:27 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:20:27.289 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:20:27 compute-0 nova_compute[189387]: 2025-11-26 23:20:27.383 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:20:27 compute-0 nova_compute[189387]: 2025-11-26 23:20:27.383 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:20:27 compute-0 nova_compute[189387]: 2025-11-26 23:20:27.383 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:20:27 compute-0 nova_compute[189387]: 2025-11-26 23:20:27.384 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.011 189391 DEBUG nova.compute.manager [req-1ca48463-56c5-4062-833c-baf335e151cf req-e0cd5da2-3881-498c-af83-9436ae181484 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Received event network-vif-plugged-faf484ac-094d-4505-a5ff-b8f5b82ac0cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.011 189391 DEBUG oslo_concurrency.lockutils [req-1ca48463-56c5-4062-833c-baf335e151cf req-e0cd5da2-3881-498c-af83-9436ae181484 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "0d344cef-8e34-4a0c-b747-b8f1f12bbe26-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.012 189391 DEBUG oslo_concurrency.lockutils [req-1ca48463-56c5-4062-833c-baf335e151cf req-e0cd5da2-3881-498c-af83-9436ae181484 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "0d344cef-8e34-4a0c-b747-b8f1f12bbe26-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.012 189391 DEBUG oslo_concurrency.lockutils [req-1ca48463-56c5-4062-833c-baf335e151cf req-e0cd5da2-3881-498c-af83-9436ae181484 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "0d344cef-8e34-4a0c-b747-b8f1f12bbe26-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.012 189391 DEBUG nova.compute.manager [req-1ca48463-56c5-4062-833c-baf335e151cf req-e0cd5da2-3881-498c-af83-9436ae181484 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Processing event network-vif-plugged-faf484ac-094d-4505-a5ff-b8f5b82ac0cf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.070 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764199228.069989, 0d344cef-8e34-4a0c-b747-b8f1f12bbe26 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.070 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] VM Started (Lifecycle Event)#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.073 189391 DEBUG nova.compute.manager [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.077 189391 DEBUG nova.virt.libvirt.driver [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.084 189391 INFO nova.virt.libvirt.driver [-] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Instance spawned successfully.#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.084 189391 DEBUG nova.virt.libvirt.driver [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.091 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.097 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.106 189391 DEBUG nova.virt.libvirt.driver [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.107 189391 DEBUG nova.virt.libvirt.driver [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.107 189391 DEBUG nova.virt.libvirt.driver [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.107 189391 DEBUG nova.virt.libvirt.driver [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.108 189391 DEBUG nova.virt.libvirt.driver [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.108 189391 DEBUG nova.virt.libvirt.driver [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.114 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.114 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764199228.0740206, 0d344cef-8e34-4a0c-b747-b8f1f12bbe26 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.114 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] VM Paused (Lifecycle Event)#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.140 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.145 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764199228.0767543, 0d344cef-8e34-4a0c-b747-b8f1f12bbe26 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.146 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] VM Resumed (Lifecycle Event)#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.153 189391 INFO nova.compute.manager [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Took 6.94 seconds to spawn the instance on the hypervisor.#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.154 189391 DEBUG nova.compute.manager [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.164 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.171 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.173 189391 DEBUG nova.network.neutron [req-ab7ddc5c-30c6-48d0-b4a2-62fb74d42137 req-f28bbd08-cde3-4ca3-be19-86ed61f0c2fb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Updated VIF entry in instance network info cache for port faf484ac-094d-4505-a5ff-b8f5b82ac0cf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.174 189391 DEBUG nova.network.neutron [req-ab7ddc5c-30c6-48d0-b4a2-62fb74d42137 req-f28bbd08-cde3-4ca3-be19-86ed61f0c2fb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Updating instance_info_cache with network_info: [{"id": "faf484ac-094d-4505-a5ff-b8f5b82ac0cf", "address": "fa:16:3e:22:64:1d", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf484ac-09", "ovs_interfaceid": "faf484ac-094d-4505-a5ff-b8f5b82ac0cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.209 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.230 189391 DEBUG oslo_concurrency.lockutils [req-ab7ddc5c-30c6-48d0-b4a2-62fb74d42137 req-f28bbd08-cde3-4ca3-be19-86ed61f0c2fb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Releasing lock "refresh_cache-0d344cef-8e34-4a0c-b747-b8f1f12bbe26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.246 189391 INFO nova.compute.manager [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Took 7.60 seconds to build instance.#033[00m
Nov 26 23:20:28 compute-0 nova_compute[189387]: 2025-11-26 23:20:28.275 189391 DEBUG oslo_concurrency.lockutils [None req-65199cfd-011e-4bbd-8d44-f4b406e3d234 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "0d344cef-8e34-4a0c-b747-b8f1f12bbe26" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:20:28 compute-0 podman[240455]: 2025-11-26 23:20:28.889760853 +0000 UTC m=+0.176735453 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 26 23:20:29 compute-0 nova_compute[189387]: 2025-11-26 23:20:29.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:20:29 compute-0 nova_compute[189387]: 2025-11-26 23:20:29.734 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:20:29 compute-0 podman[203621]: time="2025-11-26T23:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:20:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:20:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4793 "" "Go-http-client/1.1"
Nov 26 23:20:30 compute-0 nova_compute[189387]: 2025-11-26 23:20:30.102 189391 DEBUG nova.compute.manager [req-8b0df0ee-9446-4ec6-8b35-1b304c65857e req-9e9612ff-e567-4723-b6f9-1b074de7bbdb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Received event network-vif-plugged-faf484ac-094d-4505-a5ff-b8f5b82ac0cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:20:30 compute-0 nova_compute[189387]: 2025-11-26 23:20:30.103 189391 DEBUG oslo_concurrency.lockutils [req-8b0df0ee-9446-4ec6-8b35-1b304c65857e req-9e9612ff-e567-4723-b6f9-1b074de7bbdb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "0d344cef-8e34-4a0c-b747-b8f1f12bbe26-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:20:30 compute-0 nova_compute[189387]: 2025-11-26 23:20:30.103 189391 DEBUG oslo_concurrency.lockutils [req-8b0df0ee-9446-4ec6-8b35-1b304c65857e req-9e9612ff-e567-4723-b6f9-1b074de7bbdb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "0d344cef-8e34-4a0c-b747-b8f1f12bbe26-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:20:30 compute-0 nova_compute[189387]: 2025-11-26 23:20:30.104 189391 DEBUG oslo_concurrency.lockutils [req-8b0df0ee-9446-4ec6-8b35-1b304c65857e req-9e9612ff-e567-4723-b6f9-1b074de7bbdb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "0d344cef-8e34-4a0c-b747-b8f1f12bbe26-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:20:30 compute-0 nova_compute[189387]: 2025-11-26 23:20:30.105 189391 DEBUG nova.compute.manager [req-8b0df0ee-9446-4ec6-8b35-1b304c65857e req-9e9612ff-e567-4723-b6f9-1b074de7bbdb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] No waiting events found dispatching network-vif-plugged-faf484ac-094d-4505-a5ff-b8f5b82ac0cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:20:30 compute-0 nova_compute[189387]: 2025-11-26 23:20:30.106 189391 WARNING nova.compute.manager [req-8b0df0ee-9446-4ec6-8b35-1b304c65857e req-9e9612ff-e567-4723-b6f9-1b074de7bbdb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Received unexpected event network-vif-plugged-faf484ac-094d-4505-a5ff-b8f5b82ac0cf for instance with vm_state active and task_state None.#033[00m
Nov 26 23:20:30 compute-0 nova_compute[189387]: 2025-11-26 23:20:30.809 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:20:31 compute-0 openstack_network_exporter[205787]: ERROR   23:20:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:20:31 compute-0 openstack_network_exporter[205787]: ERROR   23:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:20:31 compute-0 openstack_network_exporter[205787]: ERROR   23:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:20:31 compute-0 openstack_network_exporter[205787]: ERROR   23:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:20:31 compute-0 openstack_network_exporter[205787]: ERROR   23:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:20:31 compute-0 podman[240484]: 2025-11-26 23:20:31.822347983 +0000 UTC m=+0.092668233 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 26 23:20:31 compute-0 podman[240491]: 2025-11-26 23:20:31.834401606 +0000 UTC m=+0.096428231 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 26 23:20:31 compute-0 podman[240485]: 2025-11-26 23:20:31.868237213 +0000 UTC m=+0.123187905 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 26 23:20:31 compute-0 podman[240483]: 2025-11-26 23:20:31.868790637 +0000 UTC m=+0.139157859 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:20:31 compute-0 podman[240482]: 2025-11-26 23:20:31.870639064 +0000 UTC m=+0.146393695 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, container_name=kepler, io.openshift.expose-services=, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, config_id=edpm, io.openshift.tags=base rhel9)
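The four health_status entries above show podman running each container's configured healthcheck ('test' command under the mounted /var/lib/openstack/healthchecks/<name> directory) and recording healthy with a failing streak of 0. A minimal sketch of reading that state back out-of-band, assuming podman is on PATH and exposes State.Health in its inspect JSON (key names vary slightly across podman versions):

    # Sketch: read back the health state the podman log lines above report.
    # Assumes podman is installed and the container names match this log.
    import json
    import subprocess

    def health_status(name: str) -> str:
        # 'podman inspect' emits a JSON array with one object per container.
        out = subprocess.run(
            ["podman", "inspect", name],
            capture_output=True, text=True, check=True,
        ).stdout
        state = json.loads(out)[0]["State"]
        # "Health" is only present for containers that define a healthcheck.
        return state.get("Health", {}).get("Status", "no healthcheck")

    for name in ("openstack_network_exporter", "ceilometer_agent_ipmi",
                 "node_exporter", "kepler"):
        print(name, health_status(name))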
Nov 26 23:20:34 compute-0 nova_compute[189387]: 2025-11-26 23:20:34.736 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:20:35 compute-0 nova_compute[189387]: 2025-11-26 23:20:35.814 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.840 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.841 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
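The two manager lines above note that the [pollsters] source has more pollsters than worker threads, so the cycle is serialized on a single-thread executor. A toy sketch of that pattern (illustrative names, not ceilometer internals): submissions beyond max_workers simply queue and run in order.

    # Sketch of the pattern the manager describes: when pollsters outnumber
    # worker threads, submissions queue and the polling cycle runs serially.
    from concurrent.futures import ThreadPoolExecutor
    import time

    def poll(name: str) -> str:
        time.sleep(0.1)          # stand-in for one pollster's work
        return name

    pollsters = ["disk.ephemeral.size", "network.incoming.packets", "cpu"]
    with ThreadPoolExecutor(max_workers=1) as executor:  # matches "[1] threads"
        futures = [executor.submit(poll, p) for p in pollsters]
        for f in futures:
            print("finished", f.result())  # completes in submission order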
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.842 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.843 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:20:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:36.855 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 0d344cef-8e34-4a0c-b747-b8f1f12bbe26 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 26 23:20:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:37.303 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/0d344cef-8e34-4a0c-b747-b8f1f12bbe26 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}caea05af4ff3bb71dca694a18a22cbf449a7452987534b1df6f159c64c91df36" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 26 23:20:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:37.797 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Wed, 26 Nov 2025 23:20:37 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-1015acbf-b09f-4c5c-a045-f1fa04deb6dd x-openstack-request-id: req-1015acbf-b09f-4c5c-a045-f1fa04deb6dd _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 26 23:20:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:37.797 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "0d344cef-8e34-4a0c-b747-b8f1f12bbe26", "name": "vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp", "status": "ACTIVE", "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "user_id": "6ad061874c77438db2e6d8efb2b1400b", "metadata": {"metering.server_group": "6ec897c5-079b-468e-ab49-e7a7350f9bc9"}, "hostId": "78fe62e880b703c207d346101c9f9f1436f7f233cb48d27a5485236f", "image": {"id": "422f324f-e13a-4c74-ba29-023e791ed636", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/422f324f-e13a-4c74-ba29-023e791ed636"}]}, "flavor": {"id": "abcd883d-a9af-4dee-93ae-b5623bc853b6", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/abcd883d-a9af-4dee-93ae-b5623bc853b6"}]}, "created": "2025-11-26T23:20:18Z", "updated": "2025-11-26T23:20:28Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.173", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:22:64:1d"}, {"version": 4, "addr": "192.168.122.185", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:22:64:1d"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/0d344cef-8e34-4a0c-b747-b8f1f12bbe26"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/0d344cef-8e34-4a0c-b747-b8f1f12bbe26"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-26T23:20:28.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 26 23:20:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:37.797 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/0d344cef-8e34-4a0c-b747-b8f1f12bbe26 used request id req-1015acbf-b09f-4c5c-a045-f1fa04deb6dd request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 26 23:20:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:37.800 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '0d344cef-8e34-4a0c-b747-b8f1f12bbe26', 'name': 'vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp', 'flavor': {'id': 'abcd883d-a9af-4dee-93ae-b5623bc853b6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '422f324f-e13a-4c74-ba29-023e791ed636'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd2e793599b6418881c391df7f71e0c6', 'user_id': '6ad061874c77438db2e6d8efb2b1400b', 'hostId': '78fe62e880b703c207d346101c9f9f1436f7f233cb48d27a5485236f', 'status': 'active', 'metadata': {'metering.server_group': '6ec897c5-079b-468e-ab49-e7a7350f9bc9'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 23:20:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:37.803 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 3214d9e6-3c61-49f0-a353-01201a6aa6db from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 26 23:20:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:37.803 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/3214d9e6-3c61-49f0-a353-01201a6aa6db -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}caea05af4ff3bb71dca694a18a22cbf449a7452987534b1df6f159c64c91df36" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.169 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1848 Content-Type: application/json Date: Wed, 26 Nov 2025 23:20:37 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-1234f1bb-d782-4ece-ac87-0039532f99da x-openstack-request-id: req-1234f1bb-d782-4ece-ac87-0039532f99da _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.170 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "3214d9e6-3c61-49f0-a353-01201a6aa6db", "name": "test_0", "status": "ACTIVE", "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "user_id": "6ad061874c77438db2e6d8efb2b1400b", "metadata": {}, "hostId": "78fe62e880b703c207d346101c9f9f1436f7f233cb48d27a5485236f", "image": {"id": "422f324f-e13a-4c74-ba29-023e791ed636", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/422f324f-e13a-4c74-ba29-023e791ed636"}]}, "flavor": {"id": "abcd883d-a9af-4dee-93ae-b5623bc853b6", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/abcd883d-a9af-4dee-93ae-b5623bc853b6"}]}, "created": "2025-11-26T23:18:57Z", "updated": "2025-11-26T23:19:09Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.4", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:bf:c7:ca"}, {"version": 4, "addr": "192.168.122.212", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:bf:c7:ca"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/3214d9e6-3c61-49f0-a353-01201a6aa6db"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/3214d9e6-3c61-49f0-a353-01201a6aa6db"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-26T23:19:09.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.171 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/3214d9e6-3c61-49f0-a353-01201a6aa6db used request id req-1234f1bb-d782-4ece-ac87-0039532f99da request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.173 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3214d9e6-3c61-49f0-a353-01201a6aa6db', 'name': 'test_0', 'flavor': {'id': 'abcd883d-a9af-4dee-93ae-b5623bc853b6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '422f324f-e13a-4c74-ba29-023e791ed636'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd2e793599b6418881c391df7f71e0c6', 'user_id': '6ad061874c77438db2e6d8efb2b1400b', 'hostId': '78fe62e880b703c207d346101c9f9f1436f7f233cb48d27a5485236f', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
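The REQ/RESP pairs above are the agent's per-instance metadata lookups against the Nova API; the logged curl line is novaclient's debug rendering of the same GET. A rough equivalent using the libraries the log paths name (keystoneauth1, python-novaclient); the auth URL and credentials below are placeholders, not values from this log:

    # Sketch of the GET /v2.1/servers/<uuid> call logged above.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client

    auth = v3.Password(
        auth_url="https://keystone-internal.openstack.svc:5000/v3",  # placeholder
        username="ceilometer", password="...", project_name="service",
        user_domain_name="Default", project_domain_name="Default",
    )
    nova = client.Client("2.1", session=session.Session(auth=auth),
                         endpoint_type="internal")  # matches OS_ENDPOINT_TYPE
    server = nova.servers.get("0d344cef-8e34-4a0c-b747-b8f1f12bbe26")
    print(server.name, server.status)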
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.173 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.173 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.174 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.176 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.177 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.179 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.179 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.180 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.181 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T23:20:38.174881) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.182 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.183 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T23:20:38.180714) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.186 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 0d344cef-8e34-4a0c-b747-b8f1f12bbe26 / tapfaf484ac-09 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.187 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.191 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 3214d9e6-3c61-49f0-a353-01201a6aa6db / tap3109b207-2f inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.191 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.packets volume: 18 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.192 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
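"No delta meter predecessor" above means this is the first reading the inspector has seen for that (instance, vNIC) pair, so there is nothing to diff against yet. A sketch of that bookkeeping (illustrative, not ceilometer's actual cache):

    # First reading for an (instance, vnic) pair has no predecessor;
    # only later polls can yield a delta.
    cache: dict[tuple[str, str], int] = {}

    def delta(instance: str, vnic: str, counter: int) -> int | None:
        prev = cache.get((instance, vnic))
        cache[(instance, vnic)] = counter
        if prev is None:
            return None          # "No delta meter predecessor"
        return counter - prev

    print(delta("0d344cef", "tapfaf484ac-09", 1))   # None on first poll
    print(delta("0d344cef", "tapfaf484ac-09", 5))   # 4 on the next poll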
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.192 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.193 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.193 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.194 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.194 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.195 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.195 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.196 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T23:20:38.194551) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.196 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.197 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.197 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.198 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.199 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.199 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T23:20:38.198289) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.200 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.201 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.201 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.201 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.202 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.202 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.203 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.204 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T23:20:38.203073) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.231 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/cpu volume: 9800000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.258 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/cpu volume: 33690000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.259 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
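The cpu samples above are cumulative guest CPU time in nanoseconds (33690000000 ns is about 33.7 s). A utilisation rate only falls out once two polls can be compared, e.g.:

    # Derive a CPU utilisation percentage from two cumulative samples.
    def cpu_util_percent(prev_ns: int, curr_ns: int,
                         interval_s: float, vcpus: int) -> float:
        return (curr_ns - prev_ns) / (interval_s * 1e9 * vcpus) * 100.0

    # e.g. a 1-vCPU guest that accrued 3.0 s of CPU time over a 30 s interval:
    print(cpu_util_percent(30_690_000_000, 33_690_000_000, 30.0, 1))  # 10.0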
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.260 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.260 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.261 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.261 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.263 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.263 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.264 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.265 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.266 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.266 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.268 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T23:20:38.262661) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.267 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.269 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.270 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.271 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T23:20:38.269905) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.304 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.305 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.306 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.334 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.335 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.336 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.338 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
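Three capacity samples per instance line up with the attached devices: two 1 GiB virtual disks matching the flavor's disk=1 and ephemeral=1, plus a small third device consistent with the config drive ("config_drive": "True" in the RESP bodies above). Checking the arithmetic:

    # The per-device capacity values above, sanity-checked.
    GIB = 1024 ** 3
    assert 1073741824 == 1 * GIB        # root and ephemeral disks, 1 GiB each
    print(583680 / 1024, "KiB")         # 570.0 KiB config-drive device
    print(485376 / 1024, "KiB")         # 474.0 KiB config-drive device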
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.338 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.339 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.340 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.341 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.341 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.342 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.bytes volume: 2104 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.344 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.345 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T23:20:38.340775) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.346 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.346 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.347 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.348 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.348 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.350 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.351 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.352 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.353 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.353 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.354 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.355 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T23:20:38.348068) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.356 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T23:20:38.354588) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.355 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.357 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 0d344cef-8e34-4a0c-b747-b8f1f12bbe26: ceilometer.compute.pollsters.NoVolumeException
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.358 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/memory.usage volume: 48.9453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.359 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
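memory.usage is reported in MiB and comes from libvirt's per-domain memory statistics; the NoVolumeException for instance 0d344cef-... above is the expected path when a just-launched guest (booted at 23:20:28) has not reported them yet. A sketch of the underlying call, assuming the libvirt-python bindings and a local qemu connection; the available-minus-unused formula here is an approximation of what the pollster derives, not a verified copy of it:

    # Where memory.usage comes from: libvirt's per-domain memoryStats().
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000001")
    stats = dom.memoryStats()                   # values reported in KiB
    if "available" in stats and "unused" in stats:
        used_mib = (stats["available"] - stats["unused"]) / 1024.0
        print(round(used_mib, 4), "MiB")        # e.g. 48.9453 MiB above
    else:
        # A freshly booted guest may not report stats yet -> the
        # NoVolumeException path seen in the warning above.
        print("memory stats not reported yet")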
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.359 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.360 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.361 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.361 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.362 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.363 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T23:20:38.362545) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.364 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.364 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.bytes volume: 2010 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.365 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.365 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.366 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.366 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.367 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.367 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.368 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-26T23:20:38.367453) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.368 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.368 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp>, <NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp>, <NovaLikeServer: test_0>]
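The ERROR above is the permanent-blacklist contract: the DEBUG line states the libvirt inspector provides no data for this *.rate meter, so the pollster raises PollsterPermanentError carrying the discovered servers, and the manager stops polling them for that meter, meaning the failure is logged once rather than on every interval. A sketch of that behaviour under assumed names:

    # A pollster that can never serve a meter raises an error listing the
    # resources; the manager blacklists that pollster/resource combination.

    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            super().__init__(resources)
            self.resources = resources

    blacklist = {}

    def poll(meter, resources, supported):
        todo = [r for r in resources if r not in blacklist.get(meter, set())]
        if not todo:
            return  # everything blacklisted: stay silent from now on
        if not supported:  # e.g. LibvirtInspector has no data for *.rate
            blacklist.setdefault(meter, set()).update(todo)
            raise PollsterPermanentError(todo)

    servers = ["vnf-example", "test_0"]  # illustrative names
    try:
        poll("network.outgoing.bytes.rate", servers, supported=False)
    except PollsterPermanentError as exc:
        print(f"Prevent pollster from polling {exc.resources} anymore!")
    poll("network.outgoing.bytes.rate", servers, supported=False)  # no output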
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.369 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.370 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.370 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.371 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.371 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.372 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T23:20:38.371550) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.372 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.373 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.374 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.374 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.374 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.375 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.375 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.376 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T23:20:38.376028) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.376 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.482 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.484 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.485 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.553 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.555 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.556 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.557 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
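The disk.device.* meters are per device, not per instance, which is why each instance UUID appears three times above: one sample line per attached disk. A sketch of that fan-out; the device names and the resource-id scheme are illustrative assumptions:

    # One sample per (instance, device) pair; counters are invented but
    # match the values logged for the second instance above.

    def per_device_samples(instance_id, meter, per_device):
        for dev, value in per_device.items():
            yield {"resource_id": f"{instance_id}-{dev}",
                   "meter": meter, "volume": value}

    per_device = {"vda": 23308800, "vdb": 3227648, "vdc": 274786}
    for s in per_device_samples("3214d9e6-3c61-49f0-a353-01201a6aa6db",
                                "disk.device.read.bytes", per_device):
        print(s)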
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.558 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.558 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.559 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.560 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.561 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.561 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.562 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T23:20:38.560704) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.563 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.564 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.565 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.565 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.566 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.567 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.568 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.568 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.570 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.571 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.572 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.573 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T23:20:38.567988) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.573 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.574 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.575 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.576 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.576 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T23:20:38.575713) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.577 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.latency volume: 728187344 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.578 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.579 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.latency volume: 4598589 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.580 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.latency volume: 766490036 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.581 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.latency volume: 135917507 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.582 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.latency volume: 99383059 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.583 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
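The read-latency values are cumulative counters (nanoseconds spent servicing reads), so a mean service time can be derived by pairing them with the cumulative read-request counters polled further below. A back-of-envelope check, assuming the first-listed latency counter and the first-listed request counter belong to the same device:

    # Cumulative read latency over cumulative reads = mean time per read.
    read_latency_ns = 728_187_344  # disk.device.read.latency above
    read_requests = 573            # disk.device.read.requests below
    print(f"{read_latency_ns / read_requests / 1e6:.2f} ms per read")  # ~1.27 ms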
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.584 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.584 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.585 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.586 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.587 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.588 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T23:20:38.586744) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.588 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.589 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.590 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.591 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.592 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.592 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.593 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.593 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.594 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.594 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.594 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.595 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.596 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T23:20:38.595123) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.596 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.597 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.597 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.598 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.598 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.599 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.600 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
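The usage and allocation passes above report two views of the same backing storage, and for every device here allocation runs slightly ahead of usage (e.g. 21307392 vs 21233664 for the largest disk). An illustrative mapping from a libvirt blockInfo-style triple, with assumed field names:

    # Assumed correspondence: usage ~ bytes physically present on the host,
    # allocation ~ bytes allocated within the image, capacity ~ virtual size.
    block_info = {"capacity": 1073741824,
                  "allocation": 21307392,
                  "physical": 21233664}

    samples = {
        "disk.device.usage": block_info["physical"],
        "disk.device.allocation": block_info["allocation"],
    }
    print(samples)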
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.600 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.600 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.601 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.601 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.601 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.602 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T23:20:38.601874) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.602 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.603 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.604 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.604 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.605 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.606 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.606 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.607 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.607 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.607 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.608 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.608 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.608 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.609 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.610 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T23:20:38.608244) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.610 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.611 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.612 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.612 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.613 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.613 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.614 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.614 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.615 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.615 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.615 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.616 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.616 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T23:20:38.615175) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.617 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
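power.state reports an integer power-state code; the volume of 1 for both instances above means running. For reference, the mapping below follows Nova's power-state constants (libvirt's domain-state enumeration also uses 1 for a running domain):

    # Nova compute power-state codes; 1 = RUNNING, as both instances report.
    POWER_STATES = {
        0: "NOSTATE",
        1: "RUNNING",
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }
    print(POWER_STATES[1])  # RUNNING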
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.617 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.618 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.618 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.618 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.619 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.619 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.620 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.620 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.621 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.latency volume: 2067067389 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.622 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.latency volume: 14796330 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.622 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.623 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T23:20:38.619009) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.624 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.624 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.624 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.625 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.625 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.626 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.626 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T23:20:38.625612) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.627 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.627 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.628 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
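The .delta meters are derived from successive cumulative readings, so the zeros above mean either no traffic since the previous poll or a baseline being established for a freshly seen resource. A minimal sketch of that derivation:

    # Cache the previous cumulative counter per resource and emit the
    # difference; the first observation becomes the baseline (delta 0).
    _previous = {}

    def delta(resource_id, cumulative):
        last = _previous.setdefault(resource_id, cumulative)
        _previous[resource_id] = cumulative
        return cumulative - last

    print(delta("0d344cef", 90))   # 0: baseline established
    print(delta("0d344cef", 150))  # 60: bytes received since the last poll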
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.628 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.628 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.628 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.628 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.629 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.629 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.630 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.630 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.630 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.631 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.631 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.632 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.632 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.632 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.632 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.632 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.633 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.633 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T23:20:38.629256) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.633 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-26T23:20:38.633245) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.634 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.634 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp>, <NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp>, <NovaLikeServer: test_0>]
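As with network.outgoing.bytes.rate earlier, the incoming rate meter fails permanently because the inspector exposes only cumulative counters; a rate has to be derived downstream (for example by the time-series backend) from two cumulative samples. An illustrative calculation:

    # Rate from two cumulative readings taken at different times.
    def rate(prev_value, prev_ts, cur_value, cur_ts):
        return (cur_value - prev_value) / (cur_ts - prev_ts)

    # e.g. a cumulative byte counter of 2010 then 4010, ten seconds apart
    print(rate(2010, 0.0, 4010, 10.0), "B/s")  # 200.0 B/s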
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.639 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.639 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.639 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.639 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.639 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.639 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.640 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:20:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:20:38.640 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:20:39 compute-0 nova_compute[189387]: 2025-11-26 23:20:39.738 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:20:40 compute-0 nova_compute[189387]: 2025-11-26 23:20:40.817 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:20:42 compute-0 podman[240576]: 2025-11-26 23:20:42.80817048 +0000 UTC m=+0.102100159 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Nov 26 23:20:44 compute-0 nova_compute[189387]: 2025-11-26 23:20:44.740 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:20:45 compute-0 nova_compute[189387]: 2025-11-26 23:20:45.821 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:20:46 compute-0 podman[240596]: 2025-11-26 23:20:46.821835598 +0000 UTC m=+0.103529405 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 23:20:49 compute-0 nova_compute[189387]: 2025-11-26 23:20:49.742 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:20:50 compute-0 nova_compute[189387]: 2025-11-26 23:20:50.824 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:20:51 compute-0 podman[240618]: 2025-11-26 23:20:51.833247112 +0000 UTC m=+0.122282231 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute)
Nov 26 23:20:54 compute-0 nova_compute[189387]: 2025-11-26 23:20:54.743 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:20:55 compute-0 nova_compute[189387]: 2025-11-26 23:20:55.828 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:20:57 compute-0 ovn_controller[97697]: 2025-11-26T23:20:57Z|00039|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Nov 26 23:20:59 compute-0 podman[203621]: time="2025-11-26T23:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:20:59 compute-0 nova_compute[189387]: 2025-11-26 23:20:59.754 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:20:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:20:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4801 "" "Go-http-client/1.1"
Nov 26 23:20:59 compute-0 podman[240637]: 2025-11-26 23:20:59.856618635 +0000 UTC m=+0.146876248 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 26 23:21:00 compute-0 nova_compute[189387]: 2025-11-26 23:21:00.833 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:21:01 compute-0 openstack_network_exporter[205787]: ERROR   23:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:21:01 compute-0 openstack_network_exporter[205787]: ERROR   23:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:21:01 compute-0 openstack_network_exporter[205787]: ERROR   23:21:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:21:01 compute-0 openstack_network_exporter[205787]: ERROR   23:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:21:01 compute-0 openstack_network_exporter[205787]: ERROR   23:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:21:02 compute-0 podman[240676]: 2025-11-26 23:21:02.833330697 +0000 UTC m=+0.104501539 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 23:21:02 compute-0 podman[240683]: 2025-11-26 23:21:02.842991188 +0000 UTC m=+0.088403293 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 23:21:02 compute-0 podman[240675]: 2025-11-26 23:21:02.86156617 +0000 UTC m=+0.144237680 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, io.buildah.version=1.29.0, managed_by=edpm_ansible, vcs-type=git, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_id=edpm, version=9.4, container_name=kepler, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30)
Nov 26 23:21:02 compute-0 podman[240679]: 2025-11-26 23:21:02.862146605 +0000 UTC m=+0.127749763 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 26 23:21:02 compute-0 podman[240686]: 2025-11-26 23:21:02.867456003 +0000 UTC m=+0.114950721 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, vcs-type=git, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350)
Nov 26 23:21:02 compute-0 ovn_controller[97697]: 2025-11-26T23:21:02Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:22:64:1d 192.168.0.173
Nov 26 23:21:02 compute-0 ovn_controller[97697]: 2025-11-26T23:21:02Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:22:64:1d 192.168.0.173
Nov 26 23:21:04 compute-0 nova_compute[189387]: 2025-11-26 23:21:04.747 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:21:05 compute-0 nova_compute[189387]: 2025-11-26 23:21:05.835 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:21:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:21:09.623 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:21:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:21:09.624 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:21:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:21:09.625 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:21:09 compute-0 nova_compute[189387]: 2025-11-26 23:21:09.751 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:21:10 compute-0 nova_compute[189387]: 2025-11-26 23:21:10.839 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:21:13 compute-0 podman[240770]: 2025-11-26 23:21:13.857160541 +0000 UTC m=+0.130092124 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=multipathd)
Nov 26 23:21:14 compute-0 nova_compute[189387]: 2025-11-26 23:21:14.755 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:21:15 compute-0 nova_compute[189387]: 2025-11-26 23:21:15.842 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:21:17 compute-0 podman[240789]: 2025-11-26 23:21:17.821754033 +0000 UTC m=+0.103446663 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 23:21:19 compute-0 nova_compute[189387]: 2025-11-26 23:21:19.759 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:21:20 compute-0 nova_compute[189387]: 2025-11-26 23:21:20.845 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:21:22 compute-0 podman[240813]: 2025-11-26 23:21:22.79495089 +0000 UTC m=+0.084203544 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:21:23 compute-0 nova_compute[189387]: 2025-11-26 23:21:23.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:21:23 compute-0 nova_compute[189387]: 2025-11-26 23:21:23.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:21:23 compute-0 nova_compute[189387]: 2025-11-26 23:21:23.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 23:21:23 compute-0 nova_compute[189387]: 2025-11-26 23:21:23.727 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:21:23 compute-0 nova_compute[189387]: 2025-11-26 23:21:23.728 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:21:23 compute-0 nova_compute[189387]: 2025-11-26 23:21:23.729 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 23:21:23 compute-0 nova_compute[189387]: 2025-11-26 23:21:23.729 189391 DEBUG nova.objects.instance [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3214d9e6-3c61-49f0-a353-01201a6aa6db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 23:21:24 compute-0 nova_compute[189387]: 2025-11-26 23:21:24.764 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:21:25 compute-0 nova_compute[189387]: 2025-11-26 23:21:25.305 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Updating instance_info_cache with network_info: [{"id": "3109b207-2fdd-46a4-8789-08fff2b3f916", "address": "fa:16:3e:bf:c7:ca", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3109b207-2f", "ovs_interfaceid": "3109b207-2fdd-46a4-8789-08fff2b3f916", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 23:21:25 compute-0 nova_compute[189387]: 2025-11-26 23:21:25.317 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 23:21:25 compute-0 nova_compute[189387]: 2025-11-26 23:21:25.318 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 26 23:21:25 compute-0 nova_compute[189387]: 2025-11-26 23:21:25.318 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:21:25 compute-0 nova_compute[189387]: 2025-11-26 23:21:25.319 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:21:25 compute-0 nova_compute[189387]: 2025-11-26 23:21:25.343 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:21:25 compute-0 nova_compute[189387]: 2025-11-26 23:21:25.344 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:21:25 compute-0 nova_compute[189387]: 2025-11-26 23:21:25.345 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:21:25 compute-0 nova_compute[189387]: 2025-11-26 23:21:25.346 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:21:25 compute-0 nova_compute[189387]: 2025-11-26 23:21:25.459 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:21:25 compute-0 nova_compute[189387]: 2025-11-26 23:21:25.580 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:21:25 compute-0 nova_compute[189387]: 2025-11-26 23:21:25.581 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:21:25 compute-0 nova_compute[189387]: 2025-11-26 23:21:25.679 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:21:25 compute-0 nova_compute[189387]: 2025-11-26 23:21:25.681 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:21:25 compute-0 nova_compute[189387]: 2025-11-26 23:21:25.739 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:21:25 compute-0 nova_compute[189387]: 2025-11-26 23:21:25.743 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:21:25 compute-0 nova_compute[189387]: 2025-11-26 23:21:25.838 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:21:25 compute-0 nova_compute[189387]: 2025-11-26 23:21:25.849 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:21:25 compute-0 nova_compute[189387]: 2025-11-26 23:21:25.852 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:21:25 compute-0 nova_compute[189387]: 2025-11-26 23:21:25.941 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:21:25 compute-0 nova_compute[189387]: 2025-11-26 23:21:25.943 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:21:26 compute-0 nova_compute[189387]: 2025-11-26 23:21:26.002 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:21:26 compute-0 nova_compute[189387]: 2025-11-26 23:21:26.004 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:21:26 compute-0 nova_compute[189387]: 2025-11-26 23:21:26.089 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:21:26 compute-0 nova_compute[189387]: 2025-11-26 23:21:26.091 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:21:26 compute-0 nova_compute[189387]: 2025-11-26 23:21:26.156 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:21:26 compute-0 nova_compute[189387]: 2025-11-26 23:21:26.622 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:21:26 compute-0 nova_compute[189387]: 2025-11-26 23:21:26.624 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5079MB free_disk=72.36176681518555GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 23:21:26 compute-0 nova_compute[189387]: 2025-11-26 23:21:26.625 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:21:26 compute-0 nova_compute[189387]: 2025-11-26 23:21:26.625 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:21:26 compute-0 nova_compute[189387]: 2025-11-26 23:21:26.706 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 3214d9e6-3c61-49f0-a353-01201a6aa6db actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:21:26 compute-0 nova_compute[189387]: 2025-11-26 23:21:26.707 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 0d344cef-8e34-4a0c-b747-b8f1f12bbe26 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:21:26 compute-0 nova_compute[189387]: 2025-11-26 23:21:26.707 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:21:26 compute-0 nova_compute[189387]: 2025-11-26 23:21:26.708 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 23:21:26 compute-0 nova_compute[189387]: 2025-11-26 23:21:26.767 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:21:26 compute-0 nova_compute[189387]: 2025-11-26 23:21:26.781 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 23:21:26 compute-0 nova_compute[189387]: 2025-11-26 23:21:26.809 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:21:26 compute-0 nova_compute[189387]: 2025-11-26 23:21:26.810 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.185s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:21:28 compute-0 nova_compute[189387]: 2025-11-26 23:21:28.615 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:21:28 compute-0 nova_compute[189387]: 2025-11-26 23:21:28.617 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:21:28 compute-0 nova_compute[189387]: 2025-11-26 23:21:28.618 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:21:28 compute-0 nova_compute[189387]: 2025-11-26 23:21:28.618 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 23:21:29 compute-0 podman[203621]: time="2025-11-26T23:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:21:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:21:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
Nov 26 23:21:29 compute-0 nova_compute[189387]: 2025-11-26 23:21:29.767 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:21:30 compute-0 nova_compute[189387]: 2025-11-26 23:21:30.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:21:30 compute-0 nova_compute[189387]: 2025-11-26 23:21:30.126 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:21:30 compute-0 nova_compute[189387]: 2025-11-26 23:21:30.126 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:21:30 compute-0 nova_compute[189387]: 2025-11-26 23:21:30.854 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:21:30 compute-0 podman[240858]: 2025-11-26 23:21:30.891981853 +0000 UTC m=+0.170792819 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 26 23:21:31 compute-0 openstack_network_exporter[205787]: ERROR   23:21:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:21:31 compute-0 openstack_network_exporter[205787]: ERROR   23:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:21:31 compute-0 openstack_network_exporter[205787]: ERROR   23:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:21:31 compute-0 openstack_network_exporter[205787]: ERROR   23:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:21:31 compute-0 openstack_network_exporter[205787]: ERROR   23:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:21:33 compute-0 nova_compute[189387]: 2025-11-26 23:21:33.120 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:21:33 compute-0 podman[240883]: 2025-11-26 23:21:33.82254347 +0000 UTC m=+0.107655052 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, config_id=edpm, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=kepler, release-0.7.12=, maintainer=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, architecture=x86_64, io.buildah.version=1.29.0, name=ubi9)
Nov 26 23:21:33 compute-0 podman[240884]: 2025-11-26 23:21:33.833308869 +0000 UTC m=+0.121556862 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 23:21:33 compute-0 podman[240885]: 2025-11-26 23:21:33.840812193 +0000 UTC m=+0.123347058 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 26 23:21:33 compute-0 podman[240887]: 2025-11-26 23:21:33.847280002 +0000 UTC m=+0.121514302 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, release=1755695350, container_name=openstack_network_exporter, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.33.7, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 26 23:21:33 compute-0 podman[240886]: 2025-11-26 23:21:33.858771769 +0000 UTC m=+0.126153801 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi)
Nov 26 23:21:34 compute-0 nova_compute[189387]: 2025-11-26 23:21:34.769 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:21:35 compute-0 nova_compute[189387]: 2025-11-26 23:21:35.857 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:21:39 compute-0 nova_compute[189387]: 2025-11-26 23:21:39.773 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:21:40 compute-0 nova_compute[189387]: 2025-11-26 23:21:40.862 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:21:44 compute-0 nova_compute[189387]: 2025-11-26 23:21:44.776 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:21:44 compute-0 podman[240980]: 2025-11-26 23:21:44.796729873 +0000 UTC m=+0.124168450 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 26 23:21:45 compute-0 nova_compute[189387]: 2025-11-26 23:21:45.867 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:21:48 compute-0 podman[241002]: 2025-11-26 23:21:48.828955462 +0000 UTC m=+0.110559628 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:21:49 compute-0 nova_compute[189387]: 2025-11-26 23:21:49.780 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:21:50 compute-0 nova_compute[189387]: 2025-11-26 23:21:50.871 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:21:53 compute-0 podman[241026]: 2025-11-26 23:21:53.809192509 +0000 UTC m=+0.095886627 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 26 23:21:54 compute-0 nova_compute[189387]: 2025-11-26 23:21:54.786 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:21:55 compute-0 nova_compute[189387]: 2025-11-26 23:21:55.875 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:21:59 compute-0 podman[203621]: time="2025-11-26T23:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:21:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:21:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4802 "" "Go-http-client/1.1"
Nov 26 23:21:59 compute-0 nova_compute[189387]: 2025-11-26 23:21:59.785 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:22:00 compute-0 nova_compute[189387]: 2025-11-26 23:22:00.879 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:22:01 compute-0 openstack_network_exporter[205787]: ERROR   23:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:22:01 compute-0 openstack_network_exporter[205787]: ERROR   23:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:22:01 compute-0 openstack_network_exporter[205787]: ERROR   23:22:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:22:01 compute-0 openstack_network_exporter[205787]: ERROR   23:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:22:01 compute-0 openstack_network_exporter[205787]: ERROR   23:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:22:01 compute-0 podman[241046]: 2025-11-26 23:22:01.87675413 +0000 UTC m=+0.159107127 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 26 23:22:04 compute-0 nova_compute[189387]: 2025-11-26 23:22:04.788 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:22:04 compute-0 podman[241072]: 2025-11-26 23:22:04.818094956 +0000 UTC m=+0.082239391 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 26 23:22:04 compute-0 podman[241071]: 2025-11-26 23:22:04.830682043 +0000 UTC m=+0.088455507 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:22:04 compute-0 podman[241070]: 2025-11-26 23:22:04.868617307 +0000 UTC m=+0.136034619 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, io.openshift.expose-services=, release-0.7.12=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.29.0, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, container_name=kepler)
Nov 26 23:22:04 compute-0 podman[241073]: 2025-11-26 23:22:04.882641263 +0000 UTC m=+0.129272170 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 26 23:22:04 compute-0 podman[241084]: 2025-11-26 23:22:04.891761756 +0000 UTC m=+0.129141155 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.openshift.tags=minimal rhel9, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, vcs-type=git, architecture=x86_64, container_name=openstack_network_exporter)
Nov 26 23:22:05 compute-0 nova_compute[189387]: 2025-11-26 23:22:05.884 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:22:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:22:09.624 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:22:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:22:09.625 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:22:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:22:09.625 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:22:09 compute-0 nova_compute[189387]: 2025-11-26 23:22:09.789 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:22:10 compute-0 nova_compute[189387]: 2025-11-26 23:22:10.888 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:22:14 compute-0 nova_compute[189387]: 2025-11-26 23:22:14.792 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:22:15 compute-0 podman[241162]: 2025-11-26 23:22:15.76696867 +0000 UTC m=+0.066299825 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 26 23:22:15 compute-0 nova_compute[189387]: 2025-11-26 23:22:15.892 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:22:19 compute-0 podman[241183]: 2025-11-26 23:22:19.794071558 +0000 UTC m=+0.085900809 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:22:19 compute-0 nova_compute[189387]: 2025-11-26 23:22:19.796 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:22:20 compute-0 nova_compute[189387]: 2025-11-26 23:22:20.895 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:22:23 compute-0 nova_compute[189387]: 2025-11-26 23:22:23.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:22:23 compute-0 nova_compute[189387]: 2025-11-26 23:22:23.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:22:23 compute-0 nova_compute[189387]: 2025-11-26 23:22:23.763 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-0d344cef-8e34-4a0c-b747-b8f1f12bbe26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:22:23 compute-0 nova_compute[189387]: 2025-11-26 23:22:23.764 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-0d344cef-8e34-4a0c-b747-b8f1f12bbe26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:22:23 compute-0 nova_compute[189387]: 2025-11-26 23:22:23.765 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 23:22:24 compute-0 nova_compute[189387]: 2025-11-26 23:22:24.797 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:22:24 compute-0 podman[241210]: 2025-11-26 23:22:24.820586069 +0000 UTC m=+0.102297307 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 26 23:22:25 compute-0 nova_compute[189387]: 2025-11-26 23:22:25.393 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Updating instance_info_cache with network_info: [{"id": "faf484ac-094d-4505-a5ff-b8f5b82ac0cf", "address": "fa:16:3e:22:64:1d", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf484ac-09", "ovs_interfaceid": "faf484ac-094d-4505-a5ff-b8f5b82ac0cf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:22:25 compute-0 nova_compute[189387]: 2025-11-26 23:22:25.413 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-0d344cef-8e34-4a0c-b747-b8f1f12bbe26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:22:25 compute-0 nova_compute[189387]: 2025-11-26 23:22:25.414 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 26 23:22:25 compute-0 nova_compute[189387]: 2025-11-26 23:22:25.414 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:22:25 compute-0 nova_compute[189387]: 2025-11-26 23:22:25.415 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:22:25 compute-0 nova_compute[189387]: 2025-11-26 23:22:25.435 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:22:25 compute-0 nova_compute[189387]: 2025-11-26 23:22:25.435 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:22:25 compute-0 nova_compute[189387]: 2025-11-26 23:22:25.436 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:22:25 compute-0 nova_compute[189387]: 2025-11-26 23:22:25.436 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 23:22:25 compute-0 nova_compute[189387]: 2025-11-26 23:22:25.521 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:22:25 compute-0 nova_compute[189387]: 2025-11-26 23:22:25.574 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:22:25 compute-0 nova_compute[189387]: 2025-11-26 23:22:25.576 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:22:25 compute-0 nova_compute[189387]: 2025-11-26 23:22:25.629 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:22:25 compute-0 nova_compute[189387]: 2025-11-26 23:22:25.631 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:22:25 compute-0 nova_compute[189387]: 2025-11-26 23:22:25.683 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:22:25 compute-0 nova_compute[189387]: 2025-11-26 23:22:25.684 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:22:25 compute-0 nova_compute[189387]: 2025-11-26 23:22:25.735 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:22:25 compute-0 nova_compute[189387]: 2025-11-26 23:22:25.741 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:22:25 compute-0 nova_compute[189387]: 2025-11-26 23:22:25.796 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:22:25 compute-0 nova_compute[189387]: 2025-11-26 23:22:25.798 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:22:25 compute-0 nova_compute[189387]: 2025-11-26 23:22:25.856 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:22:25 compute-0 nova_compute[189387]: 2025-11-26 23:22:25.859 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:22:25 compute-0 nova_compute[189387]: 2025-11-26 23:22:25.899 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:22:25 compute-0 nova_compute[189387]: 2025-11-26 23:22:25.916 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:22:25 compute-0 nova_compute[189387]: 2025-11-26 23:22:25.918 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:22:25 compute-0 nova_compute[189387]: 2025-11-26 23:22:25.990 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:22:26 compute-0 nova_compute[189387]: 2025-11-26 23:22:26.291 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:22:26 compute-0 nova_compute[189387]: 2025-11-26 23:22:26.293 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5039MB free_disk=72.36176681518555GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 23:22:26 compute-0 nova_compute[189387]: 2025-11-26 23:22:26.293 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:22:26 compute-0 nova_compute[189387]: 2025-11-26 23:22:26.294 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:22:26 compute-0 nova_compute[189387]: 2025-11-26 23:22:26.391 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 3214d9e6-3c61-49f0-a353-01201a6aa6db actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:22:26 compute-0 nova_compute[189387]: 2025-11-26 23:22:26.392 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 0d344cef-8e34-4a0c-b747-b8f1f12bbe26 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:22:26 compute-0 nova_compute[189387]: 2025-11-26 23:22:26.393 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:22:26 compute-0 nova_compute[189387]: 2025-11-26 23:22:26.393 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 23:22:26 compute-0 nova_compute[189387]: 2025-11-26 23:22:26.453 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:22:26 compute-0 nova_compute[189387]: 2025-11-26 23:22:26.469 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
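Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio, which is why 8 physical vCPUs can back the 2 allocated guests with room to spare. A quick check against the figures logged above:

    # capacity = (total - reserved) * allocation_ratio
    inventory = {
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, '->', capacity)
    # MEMORY_MB -> 7168.0, VCPU -> 32.0, DISK_GB -> 70.2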
Nov 26 23:22:26 compute-0 nova_compute[189387]: 2025-11-26 23:22:26.471 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:22:26 compute-0 nova_compute[189387]: 2025-11-26 23:22:26.471 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
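The acquired/released pair bracketing the resource-tracker update comes from oslo.concurrency's lockutils: the entire update runs under one in-process "compute_resources" lock (held 0.178 s here). A minimal sketch of that pattern, not nova's actual decorator chain:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        # Body executes while holding the "compute_resources" lock,
        # producing the Acquiring/acquired/released trio logged above.
        ...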
Nov 26 23:22:28 compute-0 nova_compute[189387]: 2025-11-26 23:22:28.182 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:22:28 compute-0 nova_compute[189387]: 2025-11-26 23:22:28.184 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
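Periodic tasks such as _reclaim_queued_deletes are methods registered through oslo.service's periodic_task decorator; this one bails out immediately because reclaim_instance_interval is unset (<= 0), meaning deleted instances are reclaimed at once rather than queued. A sketch of the mechanism (class name and spacing are illustrative, not nova's real values):

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            reclaim_instance_interval = 0  # stand-in for the CONF value
            if reclaim_instance_interval <= 0:
                # same guard that emits the "skipping..." line above
                return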
Nov 26 23:22:29 compute-0 nova_compute[189387]: 2025-11-26 23:22:29.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:22:29 compute-0 podman[203621]: time="2025-11-26T23:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:22:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:22:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4801 "" "Go-http-client/1.1"
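These two podman lines are a client (a Go HTTP client, per the User-Agent) walking the libpod REST API over the podman socket: one containers/json listing, one containers/stats snapshot. The same endpoint can be reached from the Python standard library alone; the socket path below is the conventional rootful one and is an assumption:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # http.client over an AF_UNIX socket instead of TCP.
        def __init__(self, path):
            super().__init__('localhost')
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection('/run/podman/podman.sock')  # assumed path
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    print(conn.getresponse().read()[:200])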
Nov 26 23:22:29 compute-0 nova_compute[189387]: 2025-11-26 23:22:29.800 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:22:30 compute-0 nova_compute[189387]: 2025-11-26 23:22:30.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:22:30 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 26 23:22:30 compute-0 nova_compute[189387]: 2025-11-26 23:22:30.904 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:22:31 compute-0 nova_compute[189387]: 2025-11-26 23:22:31.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:22:31 compute-0 nova_compute[189387]: 2025-11-26 23:22:31.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:22:31 compute-0 openstack_network_exporter[205787]: ERROR   23:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:22:31 compute-0 openstack_network_exporter[205787]: ERROR   23:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:22:31 compute-0 openstack_network_exporter[205787]: ERROR   23:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:22:31 compute-0 openstack_network_exporter[205787]: ERROR   23:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:22:31 compute-0 openstack_network_exporter[205787]: ERROR   23:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
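These exporter errors are expected on a compute node: ovn-northd and the OVN databases run on the control plane, so their ovs-appctl control sockets do not exist here; only ovn-controller's does. A check along the lines of what appctl.go probes for (socket paths are conventional and assumed):

    import glob

    for pattern in ('/var/run/ovn/ovn-northd.*.ctl',          # absent on compute
                    '/var/run/openvswitch/ovsdb-server.*.ctl',
                    '/var/run/ovn/ovn-controller.*.ctl'):     # present here
        print(pattern, '->', glob.glob(pattern) or 'not found')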
Nov 26 23:22:32 compute-0 nova_compute[189387]: 2025-11-26 23:22:32.126 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:22:32 compute-0 podman[241255]: 2025-11-26 23:22:32.815195041 +0000 UTC m=+0.116413305 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller)
Nov 26 23:22:34 compute-0 nova_compute[189387]: 2025-11-26 23:22:34.802 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:22:35 compute-0 podman[241295]: 2025-11-26 23:22:35.828378639 +0000 UTC m=+0.091777166 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, container_name=openstack_network_exporter, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 26 23:22:35 compute-0 podman[241282]: 2025-11-26 23:22:35.831747489 +0000 UTC m=+0.112267354 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:22:35 compute-0 podman[241283]: 2025-11-26 23:22:35.834460431 +0000 UTC m=+0.106068308 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible)
Nov 26 23:22:35 compute-0 podman[241281]: 2025-11-26 23:22:35.836619659 +0000 UTC m=+0.122799427 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:22:35 compute-0 podman[241280]: 2025-11-26 23:22:35.857794976 +0000 UTC m=+0.145979937 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_id=edpm, maintainer=Red Hat, Inc., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vendor=Red Hat, Inc.)
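Each health_status event above is podman running a container's configured healthcheck and logging the verdict (all healthy, failing streak 0). The embedded config_data shows how this is wired: the healthcheck script directory is bind-mounted at /openstack and 'test' is the command podman executes. A rough, illustrative mapping of that block onto podman flags (the real translation is performed by edpm_ansible, not this snippet):

    import shlex

    config = {
        'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified',
        'healthcheck': {
            'mount': '/var/lib/openstack/healthchecks/ovn_controller',
            'test': '/openstack/healthcheck',
        },
    }

    args = [
        'podman', 'run', '--detach',
        '--health-cmd', config['healthcheck']['test'],
        '--volume', config['healthcheck']['mount'] + ':/openstack:ro,z',
        config['image'],
    ]
    print(shlex.join(args))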
Nov 26 23:22:35 compute-0 nova_compute[189387]: 2025-11-26 23:22:35.908 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.841 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.841 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
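The warning above is benign: the [pollsters] source defines more pollsters than the single worker thread allotted to it, so they execute serially and a polling cycle simply takes longer. The executor pattern in play, reduced to a sketch:

    from concurrent.futures import ThreadPoolExecutor

    pollsters = ['disk.ephemeral.size', 'network.incoming.packets',
                 'disk.root.size', 'cpu']          # more tasks than workers

    def poll(name):
        return 'polled ' + name

    # max_workers=1 matches "Processing pollsters ... with [1] threads":
    # tasks queue up and run one at a time.
    with ThreadPoolExecutor(max_workers=1) as executor:
        for result in executor.map(poll, pollsters):
            print(result)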
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.841 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.842 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.845 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.853 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '0d344cef-8e34-4a0c-b747-b8f1f12bbe26', 'name': 'vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp', 'flavor': {'id': 'abcd883d-a9af-4dee-93ae-b5623bc853b6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '422f324f-e13a-4c74-ba29-023e791ed636'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd2e793599b6418881c391df7f71e0c6', 'user_id': '6ad061874c77438db2e6d8efb2b1400b', 'hostId': '78fe62e880b703c207d346101c9f9f1436f7f233cb48d27a5485236f', 'status': 'active', 'metadata': {'metering.server_group': '6ec897c5-079b-468e-ab49-e7a7350f9bc9'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.861 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.861 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3214d9e6-3c61-49f0-a353-01201a6aa6db', 'name': 'test_0', 'flavor': {'id': 'abcd883d-a9af-4dee-93ae-b5623bc853b6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '422f324f-e13a-4c74-ba29-023e791ed636'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd2e793599b6418881c391df7f71e0c6', 'user_id': '6ad061874c77438db2e6d8efb2b1400b', 'hostId': '78fe62e880b703c207d346101c9f9f1436f7f233cb48d27a5485236f', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.863 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.864 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.864 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.864 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.863 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [<NovaLikeServer: vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp>, <NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp>, <NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.867 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T23:22:36.864445) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.868 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [<NovaLikeServer: vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp>, <NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp>, <NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.869 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [<NovaLikeServer: vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp>, <NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp>, <NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.869 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [<NovaLikeServer: vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp>, <NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp>, <NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.866 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
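disk.ephemeral.size needs no hypervisor call at all: it is read straight from the flavor in the instance data discovered above (both guests are m1.small with 1 GB of ephemeral disk). Reduced to its essence:

    # Instance records as discovered above, trimmed to the relevant field.
    instances = [
        {'id': '0d344cef-8e34-4a0c-b747-b8f1f12bbe26', 'flavor': {'ephemeral': 1}},
        {'id': '3214d9e6-3c61-49f0-a353-01201a6aa6db', 'flavor': {'ephemeral': 1}},
    ]
    for inst in instances:
        print(inst['id'], 'disk.ephemeral.size =', inst['flavor']['ephemeral'], 'GB')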
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.872 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.873 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.874 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.874 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.876 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T23:22:36.875015) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.871 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [<NovaLikeServer: vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp>, <NovaLikeServer: test_0>], 'network.incoming.packets': [<NovaLikeServer: vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp>, <NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp>, <NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.875 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.878 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{'inspect_vnics': {}}], pollster history [{'disk.ephemeral.size': [<NovaLikeServer: vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp>, <NovaLikeServer: test_0>], 'network.incoming.packets': [<NovaLikeServer: vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp>, <NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp>, <NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.881 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{'inspect_vnics': {}}], pollster history [{'disk.ephemeral.size': [<NovaLikeServer: vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp>, <NovaLikeServer: test_0>], 'network.incoming.packets': [<NovaLikeServer: vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp>, <NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp>, <NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.882 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{'inspect_vnics': {}}], pollster history [{'disk.ephemeral.size': [<NovaLikeServer: vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp>, <NovaLikeServer: test_0>], 'network.incoming.packets': [<NovaLikeServer: vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp>, <NovaLikeServer: test_0>]}], and discovery cache [{'local_instances': [<NovaLikeServer: vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp>, <NovaLikeServer: test_0>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.888 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.896 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.packets volume: 18 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.897 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.898 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.899 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.900 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.900 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.901 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.901 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T23:22:36.901070) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.903 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.903 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.904 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.906 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.906 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.907 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.908 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T23:22:36.906994) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.909 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.911 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.912 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.913 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.913 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.914 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.914 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.915 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.917 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T23:22:36.915641) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.954 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/cpu volume: 88350000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.982 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/cpu volume: 35520000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.983 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.984 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.985 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.985 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.985 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.986 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.987 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.987 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.988 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.989 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.989 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.989 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T23:22:36.985826) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.990 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:22:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:36.991 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T23:22:36.989776) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.023 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.024 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.025 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.060 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.061 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.061 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.062 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.062 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.062 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.062 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.062 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.062 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.outgoing.bytes volume: 4624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.063 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.bytes volume: 2244 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.063 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.065 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.065 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.065 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.066 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.066 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T23:22:37.062771) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.067 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.outgoing.bytes.delta volume: 4624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.067 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.068 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.069 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.069 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.070 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.070 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.070 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/memory.usage volume: 49.08984375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.071 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/memory.usage volume: 48.9453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.072 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.072 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.073 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.073 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.073 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.074 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.074 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.bytes volume: 2010 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.075 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.077 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.077 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.077 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.077 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T23:22:37.066463) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.078 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T23:22:37.070384) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.078 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T23:22:37.073723) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.079 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.079 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.outgoing.packets volume: 39 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.080 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.080 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.081 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.081 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.082 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.082 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.083 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T23:22:37.078023) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.083 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T23:22:37.082392) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.170 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.171 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.172 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.262 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.263 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.263 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.265 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.265 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.265 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.265 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.266 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.266 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.266 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.267 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T23:22:37.266395) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.268 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.269 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.269 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.271 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.272 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.272 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.272 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.272 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.273 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.273 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.274 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.275 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.275 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T23:22:37.272199) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.276 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.276 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.276 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.latency volume: 933784002 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.276 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T23:22:37.276178) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.277 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.latency volume: 144704360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.277 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.latency volume: 114761007 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.278 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.latency volume: 766490036 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.279 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.latency volume: 135917507 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.279 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.latency volume: 99383059 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.280 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.280 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.281 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.281 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.281 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.282 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.282 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.283 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.284 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.284 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.285 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.286 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.286 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.287 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T23:22:37.282257) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.287 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.288 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.288 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.288 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.289 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T23:22:37.288875) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.289 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.290 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.291 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.291 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.292 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.293 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.293 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.294 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.294 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.295 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.295 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.296 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T23:22:37.295669) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.296 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.297 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.297 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.298 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.299 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.299 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.300 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.300 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.301 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.301 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.302 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.302 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.303 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T23:22:37.302265) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.303 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.304 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.304 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.305 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.306 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.306 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.307 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.307 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.308 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.308 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.309 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.309 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T23:22:37.308563) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.310 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.310 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.311 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.311 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.312 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.312 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.313 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.latency volume: 2743280218 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.313 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.latency volume: 15877212 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.314 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.314 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.latency volume: 2067067389 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.315 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.latency volume: 14796330 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.315 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.316 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.316 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.316 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.316 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.317 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T23:22:37.312565) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.317 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.317 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.317 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.incoming.bytes.delta volume: 4759 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.318 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.319 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.319 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.319 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T23:22:37.317605) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.319 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.319 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.320 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.320 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.320 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T23:22:37.320420) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.320 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.requests volume: 241 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.321 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.321 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.322 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.322 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.322 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.323 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.323 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.324 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.324 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.325 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.325 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.325 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.325 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.325 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.326 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.326 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.326 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.326 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.326 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.327 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.327 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.327 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.327 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.327 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.328 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:22:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:22:37.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
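[Editor's note] The ceilometer DEBUG lines above repeat one fixed pattern per meter: discovery, a coordination check, the poll itself, one "_stats_to_sample" line per instance/device, then a heartbeat update. A minimal log-parsing sketch (not part of ceilometer; the regex and the collect_samples helper are hypothetical) that folds those sample lines into per-instance, per-meter volume lists:

    import re
    from collections import defaultdict

    # Matches the "_stats_to_sample" DEBUG lines emitted above.
    SAMPLE_RE = re.compile(
        r"DEBUG ceilometer\.compute\.pollsters \[-\] "
        r"(?P<uuid>[0-9a-f-]{36})/(?P<meter>[\w.]+) volume: (?P<volume>\d+)"
    )

    def collect_samples(lines):
        # {(instance_uuid, meter): [volume, ...]} in log order
        samples = defaultdict(list)
        for line in lines:
            m = SAMPLE_RE.search(line)
            if m:
                samples[(m.group("uuid"), m.group("meter"))].append(
                    int(m.group("volume")))
        return samples

Fed the lines above, it yields e.g. ("0d344cef-8e34-4a0c-b747-b8f1f12bbe26", "disk.device.write.latency") -> [2743280218, 15877212, 0], one entry per disk device.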
Nov 26 23:22:39 compute-0 nova_compute[189387]: 2025-11-26 23:22:39.806 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:22:40 compute-0 nova_compute[189387]: 2025-11-26 23:22:40.912 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:22:44 compute-0 nova_compute[189387]: 2025-11-26 23:22:44.808 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:22:45 compute-0 nova_compute[189387]: 2025-11-26 23:22:45.916 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:22:46 compute-0 podman[241375]: 2025-11-26 23:22:46.820840299 +0000 UTC m=+0.108973897 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 26 23:22:49 compute-0 nova_compute[189387]: 2025-11-26 23:22:49.811 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:22:50 compute-0 podman[241398]: 2025-11-26 23:22:50.793156231 +0000 UTC m=+0.085895649 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 23:22:50 compute-0 nova_compute[189387]: 2025-11-26 23:22:50.918 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:22:54 compute-0 nova_compute[189387]: 2025-11-26 23:22:54.813 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:22:55 compute-0 podman[241421]: 2025-11-26 23:22:55.805753159 +0000 UTC m=+0.092725692 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:22:55 compute-0 nova_compute[189387]: 2025-11-26 23:22:55.921 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:22:59 compute-0 podman[203621]: time="2025-11-26T23:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:22:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:22:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4799 "" "Go-http-client/1.1"
Nov 26 23:22:59 compute-0 nova_compute[189387]: 2025-11-26 23:22:59.815 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:23:00 compute-0 nova_compute[189387]: 2025-11-26 23:23:00.924 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:23:01 compute-0 openstack_network_exporter[205787]: ERROR   23:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:23:01 compute-0 openstack_network_exporter[205787]: ERROR   23:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:23:01 compute-0 openstack_network_exporter[205787]: ERROR   23:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:23:01 compute-0 openstack_network_exporter[205787]: ERROR   23:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:23:01 compute-0 openstack_network_exporter[205787]: ERROR   23:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
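[Editor's note] The openstack_network_exporter errors above are lookup failures rather than crashes: the exporter drives ovsdb-server, ovn-northd and ovs-vswitchd through their appctl control sockets, and on a compute node ovn-northd never runs, so its socket is legitimately absent; the dpif-netdev calls fail the same way when no userspace (DPDK) datapath exists. The ovsdb-server error, though, suggests the exporter cannot see that daemon's socket at the path it checks, which is worth verifying. A quick check, assuming the default socket directories that are bind-mounted into the container:

    import glob

    # OVS/OVN daemons create <name>.<pid>.ctl control sockets here while
    # running; an empty result is consistent with the errors above.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        found = glob.glob(pattern)
        print(pattern, "->", found if found else "none found")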
Nov 26 23:23:03 compute-0 podman[241441]: 2025-11-26 23:23:03.901711642 +0000 UTC m=+0.191402771 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 26 23:23:04 compute-0 nova_compute[189387]: 2025-11-26 23:23:04.817 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:23:05 compute-0 nova_compute[189387]: 2025-11-26 23:23:05.928 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:23:06 compute-0 podman[241469]: 2025-11-26 23:23:06.826016403 +0000 UTC m=+0.096775440 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 26 23:23:06 compute-0 podman[241471]: 2025-11-26 23:23:06.832368642 +0000 UTC m=+0.094674533 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, distribution-scope=public, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc.)
Nov 26 23:23:06 compute-0 podman[241467]: 2025-11-26 23:23:06.840025048 +0000 UTC m=+0.121667706 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, version=9.4, config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-type=git, architecture=x86_64, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container)
Nov 26 23:23:06 compute-0 podman[241468]: 2025-11-26 23:23:06.857039973 +0000 UTC m=+0.136000229 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:23:06 compute-0 podman[241470]: 2025-11-26 23:23:06.870478132 +0000 UTC m=+0.138269029 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 26 23:23:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:23:09.626 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:23:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:23:09.627 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:23:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:23:09.628 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:23:09 compute-0 nova_compute[189387]: 2025-11-26 23:23:09.820 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:23:10 compute-0 nova_compute[189387]: 2025-11-26 23:23:10.932 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:23:14 compute-0 nova_compute[189387]: 2025-11-26 23:23:14.824 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:23:15 compute-0 nova_compute[189387]: 2025-11-26 23:23:15.937 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:23:17 compute-0 podman[241561]: 2025-11-26 23:23:17.837557513 +0000 UTC m=+0.125186580 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 23:23:19 compute-0 nova_compute[189387]: 2025-11-26 23:23:19.827 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:23:20 compute-0 nova_compute[189387]: 2025-11-26 23:23:20.942 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:23:21 compute-0 podman[241582]: 2025-11-26 23:23:21.821239999 +0000 UTC m=+0.100627793 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 23:23:23 compute-0 nova_compute[189387]: 2025-11-26 23:23:23.126 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:23:23 compute-0 nova_compute[189387]: 2025-11-26 23:23:23.128 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:23:23 compute-0 nova_compute[189387]: 2025-11-26 23:23:23.129 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 23:23:23 compute-0 nova_compute[189387]: 2025-11-26 23:23:23.887 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:23:23 compute-0 nova_compute[189387]: 2025-11-26 23:23:23.888 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:23:23 compute-0 nova_compute[189387]: 2025-11-26 23:23:23.889 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 23:23:23 compute-0 nova_compute[189387]: 2025-11-26 23:23:23.890 189391 DEBUG nova.objects.instance [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3214d9e6-3c61-49f0-a353-01201a6aa6db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 23:23:24 compute-0 nova_compute[189387]: 2025-11-26 23:23:24.831 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:23:25 compute-0 nova_compute[189387]: 2025-11-26 23:23:25.265 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Updating instance_info_cache with network_info: [{"id": "3109b207-2fdd-46a4-8789-08fff2b3f916", "address": "fa:16:3e:bf:c7:ca", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3109b207-2f", "ovs_interfaceid": "3109b207-2fdd-46a4-8789-08fff2b3f916", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 23:23:25 compute-0 nova_compute[189387]: 2025-11-26 23:23:25.281 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 23:23:25 compute-0 nova_compute[189387]: 2025-11-26 23:23:25.282 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
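[Editor's note] The "Updating instance_info_cache" record above embeds the instance's full VIF list as JSON. A small sketch for auditing such a line; the summarize helper is illustrative, not nova code, and payload stands for the bracketed [...] text copied from the log:

    import json

    def summarize(payload):
        # One entry per VIF: port id, MAC, fixed and floating addresses.
        for vif in json.loads(payload):
            ips = [ip for subnet in vif["network"]["subnets"]
                   for ip in subnet["ips"]]
            fixed = [ip["address"] for ip in ips]
            floating = [fl["address"] for ip in ips
                        for fl in ip.get("floating_ips", [])]
            print(vif["id"], vif["address"],
                  "fixed:", fixed, "floating:", floating)

For the payload above it prints port 3109b207-2fdd-46a4-8789-08fff2b3f916, MAC fa:16:3e:bf:c7:ca, fixed ['192.168.0.4'] and floating ['192.168.122.212'].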
Nov 26 23:23:25 compute-0 nova_compute[189387]: 2025-11-26 23:23:25.284 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:23:25 compute-0 nova_compute[189387]: 2025-11-26 23:23:25.947 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:23:26 compute-0 nova_compute[189387]: 2025-11-26 23:23:26.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:23:26 compute-0 nova_compute[189387]: 2025-11-26 23:23:26.153 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:23:26 compute-0 nova_compute[189387]: 2025-11-26 23:23:26.154 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:23:26 compute-0 nova_compute[189387]: 2025-11-26 23:23:26.155 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
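[Editor's note] The Acquiring/acquired/released trio around "compute_resources" (and around "_check_child_processes" earlier) is oslo_concurrency's standard lock instrumentation: the lines logged from "inner" in lockutils.py come from the lockutils.synchronized decorator's wrapper, while the "refresh_cache-..." lines logged from "lock" come from the lockutils.lock context manager. A minimal sketch of the decorator form, with a hypothetical function body:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache(stale_node_ids):
        # Entry and exit of this wrapped call are what produce the
        # "acquired ... waited" and "released ... held" DEBUG lines.
        for node_id in stale_node_ids:
            pass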
Nov 26 23:23:26 compute-0 nova_compute[189387]: 2025-11-26 23:23:26.156 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:23:26 compute-0 nova_compute[189387]: 2025-11-26 23:23:26.267 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:23:26 compute-0 nova_compute[189387]: 2025-11-26 23:23:26.370 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:23:26 compute-0 nova_compute[189387]: 2025-11-26 23:23:26.373 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:23:26 compute-0 nova_compute[189387]: 2025-11-26 23:23:26.436 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:23:26 compute-0 nova_compute[189387]: 2025-11-26 23:23:26.438 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:23:26 compute-0 nova_compute[189387]: 2025-11-26 23:23:26.499 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:23:26 compute-0 nova_compute[189387]: 2025-11-26 23:23:26.501 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:23:26 compute-0 nova_compute[189387]: 2025-11-26 23:23:26.563 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:23:26 compute-0 nova_compute[189387]: 2025-11-26 23:23:26.576 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:23:26 compute-0 nova_compute[189387]: 2025-11-26 23:23:26.638 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:23:26 compute-0 nova_compute[189387]: 2025-11-26 23:23:26.641 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:23:26 compute-0 nova_compute[189387]: 2025-11-26 23:23:26.742 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:23:26 compute-0 nova_compute[189387]: 2025-11-26 23:23:26.744 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:23:26 compute-0 nova_compute[189387]: 2025-11-26 23:23:26.822 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:23:26 compute-0 nova_compute[189387]: 2025-11-26 23:23:26.823 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:23:26 compute-0 podman[241620]: 2025-11-26 23:23:26.833741495 +0000 UTC m=+0.116370524 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 26 23:23:26 compute-0 nova_compute[189387]: 2025-11-26 23:23:26.881 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
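The block above is the disk audit in nova's periodic update_available_resource task: every instance image (disk, disk.eph0) is probed with qemu-img info, wrapped in oslo_concurrency.prlimit so the child process is capped at 1 GiB of address space (--as=1073741824) and 30 s of CPU time (--cpu=30). A minimal Python sketch that replays one of the logged invocations and parses the JSON result (paths copied verbatim from the log, so it only makes sense on the same host):

    import json
    import subprocess

    # Same wrapper the log shows: oslo_concurrency.prlimit applies RLIMIT_AS
    # and RLIMIT_CPU before exec'ing the real command, so a runaway qemu-img
    # cannot exhaust the compute host.
    CMD = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824",      # RLIMIT_AS: 1 GiB address space
        "--cpu=30",             # RLIMIT_CPU: 30 seconds
        "--", "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk",
        "--force-share",        # skip image locking; the running VM holds it
        "--output=json",
    ]

    out = subprocess.run(CMD, capture_output=True, text=True, check=True)
    info = json.loads(out.stdout)
    print(info["format"], info["virtual-size"])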
Nov 26 23:23:27 compute-0 nova_compute[189387]: 2025-11-26 23:23:27.204 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 23:23:27 compute-0 nova_compute[189387]: 2025-11-26 23:23:27.205 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5050MB free_disk=72.36176681518555GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 23:23:27 compute-0 nova_compute[189387]: 2025-11-26 23:23:27.206 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:23:27 compute-0 nova_compute[189387]: 2025-11-26 23:23:27.206 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:23:27 compute-0 nova_compute[189387]: 2025-11-26 23:23:27.264 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 3214d9e6-3c61-49f0-a353-01201a6aa6db actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 23:23:27 compute-0 nova_compute[189387]: 2025-11-26 23:23:27.265 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 0d344cef-8e34-4a0c-b747-b8f1f12bbe26 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 23:23:27 compute-0 nova_compute[189387]: 2025-11-26 23:23:27.265 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 23:23:27 compute-0 nova_compute[189387]: 2025-11-26 23:23:27.265 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 23:23:27 compute-0 nova_compute[189387]: 2025-11-26 23:23:27.312 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:23:27 compute-0 nova_compute[189387]: 2025-11-26 23:23:27.333 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 23:23:27 compute-0 nova_compute[189387]: 2025-11-26 23:23:27.334 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 23:23:27 compute-0 nova_compute[189387]: 2025-11-26 23:23:27.335 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
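The inventory payload logged at 23:23:27.333 is what Placement uses to size this provider: per resource class, schedulable capacity is (total - reserved) * allocation_ratio. Checking that arithmetic against the logged numbers (a sketch; the formula is standard Placement behaviour, the values are verbatim from the log):

    # Inventory as reported for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78.
    inventory = {
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")

    # MEMORY_MB: 7168, VCPU: 32, DISK_GB: 70.2 -- consistent with the final
    # resource view above (used_vcpus=2 of 8 physical, under 4.0x overcommit).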
Nov 26 23:23:29 compute-0 nova_compute[189387]: 2025-11-26 23:23:29.335 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:23:29 compute-0 nova_compute[189387]: 2025-11-26 23:23:29.336 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:23:29 compute-0 nova_compute[189387]: 2025-11-26 23:23:29.337 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 23:23:29 compute-0 podman[203621]: time="2025-11-26T23:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:23:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:23:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4797 "" "Go-http-client/1.1"
Nov 26 23:23:29 compute-0 nova_compute[189387]: 2025-11-26 23:23:29.835 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:23:30 compute-0 nova_compute[189387]: 2025-11-26 23:23:30.951 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:23:31 compute-0 nova_compute[189387]: 2025-11-26 23:23:31.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:23:31 compute-0 openstack_network_exporter[205787]: ERROR   23:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:23:31 compute-0 openstack_network_exporter[205787]: ERROR   23:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:23:31 compute-0 openstack_network_exporter[205787]: ERROR   23:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:23:31 compute-0 openstack_network_exporter[205787]: ERROR   23:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:23:31 compute-0 openstack_network_exporter[205787]: ERROR   23:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:23:32 compute-0 nova_compute[189387]: 2025-11-26 23:23:32.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:23:32 compute-0 nova_compute[189387]: 2025-11-26 23:23:32.127 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:23:32 compute-0 nova_compute[189387]: 2025-11-26 23:23:32.129 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:23:34 compute-0 nova_compute[189387]: 2025-11-26 23:23:34.837 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:23:34 compute-0 podman[241648]: 2025-11-26 23:23:34.872445937 +0000 UTC m=+0.165323733 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:23:35 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:23:35.034 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:74:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:17:d1:48:8c:c3'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:23:35 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:23:35.036 106595 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 26 23:23:35 compute-0 nova_compute[189387]: 2025-11-26 23:23:35.052 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:23:35 compute-0 nova_compute[189387]: 2025-11-26 23:23:35.957 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:23:36 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:23:36.038 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbd59242-3683-4df7-8a2a-12b2eb702783, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
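That transaction is the metadata agent acknowledging the nb_cfg=5 bump it matched in SB_Global a second earlier: it writes the value into its own Chassis_Private row so OVN can tell this chassis has caught up. The same write done by hand with ovn-sbctl instead of ovsdbapp (record UUID copied from the log; quoting of the colon-bearing key follows ovs db-ctl syntax and is a best-effort assumption):

    import subprocess

    # Equivalent of the logged DbSetCommand: set one external_ids key on the
    # agent's Chassis_Private row in the OVN southbound DB.
    subprocess.run(
        ["ovn-sbctl", "set", "Chassis_Private",
         "bbd59242-3683-4df7-8a2a-12b2eb702783",
         'external_ids:"neutron:ovn-metadata-sb-cfg"="5"'],
        check=True,
    )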
Nov 26 23:23:37 compute-0 podman[241674]: 2025-11-26 23:23:37.813760622 +0000 UTC m=+0.107087285 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, build-date=2024-09-18T21:23:30, architecture=x86_64, container_name=kepler, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc.)
Nov 26 23:23:37 compute-0 podman[241676]: 2025-11-26 23:23:37.836723257 +0000 UTC m=+0.115345147 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 26 23:23:37 compute-0 podman[241675]: 2025-11-26 23:23:37.852359155 +0000 UTC m=+0.128780726 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 23:23:37 compute-0 podman[241682]: 2025-11-26 23:23:37.864127499 +0000 UTC m=+0.136995446 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:23:37 compute-0 podman[241683]: 2025-11-26 23:23:37.871944749 +0000 UTC m=+0.131051677 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, version=9.6, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., config_id=edpm, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.expose-services=, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.7)
Nov 26 23:23:38 compute-0 nova_compute[189387]: 2025-11-26 23:23:38.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:23:39 compute-0 nova_compute[189387]: 2025-11-26 23:23:39.840 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:23:40 compute-0 nova_compute[189387]: 2025-11-26 23:23:40.967 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:23:42 compute-0 nova_compute[189387]: 2025-11-26 23:23:42.426 189391 DEBUG oslo_concurrency.lockutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:23:42 compute-0 nova_compute[189387]: 2025-11-26 23:23:42.427 189391 DEBUG oslo_concurrency.lockutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:23:42 compute-0 nova_compute[189387]: 2025-11-26 23:23:42.447 189391 DEBUG nova.compute.manager [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 26 23:23:42 compute-0 nova_compute[189387]: 2025-11-26 23:23:42.527 189391 DEBUG oslo_concurrency.lockutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:23:42 compute-0 nova_compute[189387]: 2025-11-26 23:23:42.529 189391 DEBUG oslo_concurrency.lockutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:23:42 compute-0 nova_compute[189387]: 2025-11-26 23:23:42.540 189391 DEBUG nova.virt.hardware [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 26 23:23:42 compute-0 nova_compute[189387]: 2025-11-26 23:23:42.541 189391 INFO nova.compute.claims [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 26 23:23:42 compute-0 nova_compute[189387]: 2025-11-26 23:23:42.704 189391 DEBUG nova.compute.provider_tree [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:23:42 compute-0 nova_compute[189387]: 2025-11-26 23:23:42.726 189391 DEBUG nova.scheduler.client.report [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 23:23:42 compute-0 nova_compute[189387]: 2025-11-26 23:23:42.752 189391 DEBUG oslo_concurrency.lockutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.224s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:23:42 compute-0 nova_compute[189387]: 2025-11-26 23:23:42.754 189391 DEBUG nova.compute.manager [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 26 23:23:42 compute-0 nova_compute[189387]: 2025-11-26 23:23:42.812 189391 DEBUG nova.compute.manager [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 26 23:23:42 compute-0 nova_compute[189387]: 2025-11-26 23:23:42.813 189391 DEBUG nova.network.neutron [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 26 23:23:42 compute-0 nova_compute[189387]: 2025-11-26 23:23:42.842 189391 INFO nova.virt.libvirt.driver [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 26 23:23:42 compute-0 nova_compute[189387]: 2025-11-26 23:23:42.884 189391 DEBUG nova.compute.manager [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 26 23:23:42 compute-0 nova_compute[189387]: 2025-11-26 23:23:42.987 189391 DEBUG nova.compute.manager [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 26 23:23:42 compute-0 nova_compute[189387]: 2025-11-26 23:23:42.989 189391 DEBUG nova.virt.libvirt.driver [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 26 23:23:42 compute-0 nova_compute[189387]: 2025-11-26 23:23:42.990 189391 INFO nova.virt.libvirt.driver [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Creating image(s)#033[00m
Nov 26 23:23:42 compute-0 nova_compute[189387]: 2025-11-26 23:23:42.992 189391 DEBUG oslo_concurrency.lockutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "/var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:23:42 compute-0 nova_compute[189387]: 2025-11-26 23:23:42.992 189391 DEBUG oslo_concurrency.lockutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "/var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:23:42 compute-0 nova_compute[189387]: 2025-11-26 23:23:42.994 189391 DEBUG oslo_concurrency.lockutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "/var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.021 189391 DEBUG oslo_concurrency.processutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.115 189391 DEBUG oslo_concurrency.processutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.117 189391 DEBUG oslo_concurrency.lockutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "88820ed9476b98465b4ed33781797613b42e7ead" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.119 189391 DEBUG oslo_concurrency.lockutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "88820ed9476b98465b4ed33781797613b42e7ead" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.143 189391 DEBUG oslo_concurrency.processutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.205 189391 DEBUG oslo_concurrency.processutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.207 189391 DEBUG oslo_concurrency.processutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead,backing_fmt=raw /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.285 189391 DEBUG oslo_concurrency.processutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead,backing_fmt=raw /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk 1073741824" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.287 189391 DEBUG oslo_concurrency.lockutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "88820ed9476b98465b4ed33781797613b42e7ead" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.168s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
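The qemu-img create call above is nova's Qcow2 image backend laying out the root disk: the instance disk is a small qcow2 overlay whose backing file is the shared, content-addressed image under _base, so only per-instance writes land in the overlay. The logged command as a standalone invocation:

    import subprocess

    base = "/var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead"
    overlay = "/var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk"

    # qcow2 overlay over a raw base image, 1 GiB virtual size -- verbatim the
    # command the log records. `base` stays read-only and is shared by every
    # instance booted from the same image; deltas go to `overlay`.
    subprocess.run(
        ["env", "LC_ALL=C", "LANG=C",
         "qemu-img", "create", "-f", "qcow2",
         "-o", f"backing_file={base},backing_fmt=raw",
         overlay, "1073741824"],
        check=True,
    )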
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.288 189391 DEBUG oslo_concurrency.processutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.365 189391 DEBUG oslo_concurrency.processutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.368 189391 DEBUG nova.virt.disk.api [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Checking if we can resize image /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.369 189391 DEBUG oslo_concurrency.processutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.432 189391 DEBUG oslo_concurrency.processutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.434 189391 DEBUG nova.virt.disk.api [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Cannot resize image /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
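The "Cannot resize image ... to a smaller size" message is benign: the flavor asks for a 1 GiB root disk, the overlay was just created at exactly that virtual size, and nova's check only ever grows a disk. A sketch of that grow-only guard, reading virtual-size out of qemu-img info JSON (assumption: this condenses nova.virt.disk.api.can_resize_image to its core test):

    import json
    import subprocess

    def can_resize_image(path: str, requested_bytes: int) -> bool:
        """Grow-only: refuse when the target is not larger than today."""
        out = subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            capture_output=True, text=True, check=True)
        current = json.loads(out.stdout)["virtual-size"]
        return requested_bytes > current

    # For the log's case: requested 1073741824 == current 1073741824, so the
    # resize is skipped and the debug line above is emitted.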
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.435 189391 DEBUG nova.objects.instance [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lazy-loading 'migration_context' on Instance uuid 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.460 189391 DEBUG oslo_concurrency.lockutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "/var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.461 189391 DEBUG oslo_concurrency.lockutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "/var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.463 189391 DEBUG oslo_concurrency.lockutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "/var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.491 189391 DEBUG oslo_concurrency.processutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.563 189391 DEBUG oslo_concurrency.processutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.564 189391 DEBUG oslo_concurrency.lockutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.565 189391 DEBUG oslo_concurrency.lockutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.581 189391 DEBUG oslo_concurrency.processutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.636 189391 DEBUG oslo_concurrency.processutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.638 189391 DEBUG oslo_concurrency.processutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.679 189391 DEBUG oslo_concurrency.processutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 1073741824" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.681 189391 DEBUG oslo_concurrency.lockutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.682 189391 DEBUG oslo_concurrency.processutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.739 189391 DEBUG oslo_concurrency.processutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.741 189391 DEBUG nova.virt.libvirt.driver [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.742 189391 DEBUG nova.virt.libvirt.driver [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Ensure instance console log exists: /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.743 189391 DEBUG oslo_concurrency.lockutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.744 189391 DEBUG oslo_concurrency.lockutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:23:43 compute-0 nova_compute[189387]: 2025-11-26 23:23:43.745 189391 DEBUG oslo_concurrency.lockutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
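The vgpu_resources lock is taken and dropped in about a millisecond, which is what _allocate_mdevs looks like when the flavor requests no vGPU: there is nothing to allocate, so the critical section is empty. The acquire/wait/hold bookkeeping in these lines comes from oslo.concurrency's synchronized wrapper; a minimal sketch of the same pattern (lock name from the log, function body illustrative):

    from oslo_concurrency import lockutils

    # Nova guards _allocate_mdevs with a decorator equivalent to this;
    # the generated inner() wrapper is what logs "Acquiring lock" /
    # "acquired" / "released" together with the wait and hold times.
    @lockutils.synchronized('vgpu_resources')
    def allocate_mdevs(allocations):
        # No mediated devices requested for this instance -> nothing to
        # do, hence the ~0.001s hold time seen above.
        return None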
Nov 26 23:23:44 compute-0 nova_compute[189387]: 2025-11-26 23:23:44.099 189391 DEBUG nova.network.neutron [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Successfully updated port: c5ede21d-87b7-4215-9363-b5863725bc1e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 26 23:23:44 compute-0 nova_compute[189387]: 2025-11-26 23:23:44.119 189391 DEBUG oslo_concurrency.lockutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "refresh_cache-2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:23:44 compute-0 nova_compute[189387]: 2025-11-26 23:23:44.120 189391 DEBUG oslo_concurrency.lockutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquired lock "refresh_cache-2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:23:44 compute-0 nova_compute[189387]: 2025-11-26 23:23:44.120 189391 DEBUG nova.network.neutron [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 26 23:23:44 compute-0 nova_compute[189387]: 2025-11-26 23:23:44.203 189391 DEBUG nova.compute.manager [req-68576cff-a0f9-47fb-b95e-649b6a0adc07 req-deb597eb-8783-4d15-83d5-a660706459ea f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Received event network-changed-c5ede21d-87b7-4215-9363-b5863725bc1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:23:44 compute-0 nova_compute[189387]: 2025-11-26 23:23:44.204 189391 DEBUG nova.compute.manager [req-68576cff-a0f9-47fb-b95e-649b6a0adc07 req-deb597eb-8783-4d15-83d5-a660706459ea f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Refreshing instance network info cache due to event network-changed-c5ede21d-87b7-4215-9363-b5863725bc1e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 23:23:44 compute-0 nova_compute[189387]: 2025-11-26 23:23:44.205 189391 DEBUG oslo_concurrency.lockutils [req-68576cff-a0f9-47fb-b95e-649b6a0adc07 req-deb597eb-8783-4d15-83d5-a660706459ea f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "refresh_cache-2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:23:44 compute-0 nova_compute[189387]: 2025-11-26 23:23:44.255 189391 DEBUG nova.network.neutron [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 26 23:23:44 compute-0 nova_compute[189387]: 2025-11-26 23:23:44.844 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:23:44 compute-0 nova_compute[189387]: 2025-11-26 23:23:44.936 189391 DEBUG nova.network.neutron [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Updating instance_info_cache with network_info: [{"id": "c5ede21d-87b7-4215-9363-b5863725bc1e", "address": "fa:16:3e:d8:b5:86", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5ede21d-87", "ovs_interfaceid": "c5ede21d-87b7-4215-9363-b5863725bc1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:23:44 compute-0 nova_compute[189387]: 2025-11-26 23:23:44.960 189391 DEBUG oslo_concurrency.lockutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Releasing lock "refresh_cache-2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:23:44 compute-0 nova_compute[189387]: 2025-11-26 23:23:44.961 189391 DEBUG nova.compute.manager [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Instance network_info: |[{"id": "c5ede21d-87b7-4215-9363-b5863725bc1e", "address": "fa:16:3e:d8:b5:86", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5ede21d-87", "ovs_interfaceid": "c5ede21d-87b7-4215-9363-b5863725bc1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
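The network_info blob cached above carries one OVS port with a fixed IP and an attached floating IP. Since the structure is plain JSON, extracting the addressing is straightforward; a small sketch, assuming the blob has been captured as JSON text (for example, copied out of the log):

    import json

    def addresses(network_info_json):
        """Yield (fixed_ip, [floating_ips]) pairs from a network_info blob."""
        for vif in json.loads(network_info_json):
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    if ip["type"] == "fixed":
                        yield ip["address"], [
                            f["address"] for f in ip.get("floating_ips", [])]

    # For the port above this yields ("192.168.0.214", ["192.168.122.208"]).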
Nov 26 23:23:44 compute-0 nova_compute[189387]: 2025-11-26 23:23:44.962 189391 DEBUG oslo_concurrency.lockutils [req-68576cff-a0f9-47fb-b95e-649b6a0adc07 req-deb597eb-8783-4d15-83d5-a660706459ea f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquired lock "refresh_cache-2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:23:44 compute-0 nova_compute[189387]: 2025-11-26 23:23:44.963 189391 DEBUG nova.network.neutron [req-68576cff-a0f9-47fb-b95e-649b6a0adc07 req-deb597eb-8783-4d15-83d5-a660706459ea f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Refreshing network info cache for port c5ede21d-87b7-4215-9363-b5863725bc1e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 23:23:44 compute-0 nova_compute[189387]: 2025-11-26 23:23:44.968 189391 DEBUG nova.virt.libvirt.driver [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Start _get_guest_xml network_info=[{"id": "c5ede21d-87b7-4215-9363-b5863725bc1e", "address": "fa:16:3e:d8:b5:86", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5ede21d-87", "ovs_interfaceid": "c5ede21d-87b7-4215-9363-b5863725bc1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-26T23:17:52Z,direct_url=<?>,disk_format='qcow2',id=422f324f-e13a-4c74-ba29-023e791ed636,min_disk=0,min_ram=0,name='cirros',owner='dd2e793599b6418881c391df7f71e0c6',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-26T23:17:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': '422f324f-e13a-4c74-ba29-023e791ed636'}], 'ephemerals': [{'size': 1, 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 26 23:23:44 compute-0 nova_compute[189387]: 2025-11-26 23:23:44.981 189391 WARNING nova.virt.libvirt.driver [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 23:23:44 compute-0 nova_compute[189387]: 2025-11-26 23:23:44.997 189391 DEBUG nova.virt.libvirt.host [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 23:23:44 compute-0 nova_compute[189387]: 2025-11-26 23:23:44.998 189391 DEBUG nova.virt.libvirt.host [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.005 189391 DEBUG nova.virt.libvirt.host [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.006 189391 DEBUG nova.virt.libvirt.host [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
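The two probes above first look for a CPU controller under cgroups v1 (absent on this host) and then under cgroups v2 (present), which decides whether CPU tuning can be applied to the guest. On a v2 host the check amounts to reading the unified hierarchy's controller list; a minimal sketch of that half (the path is the standard unified mount point, an assumption rather than something the log states):

    def has_cgroupsv2_cpu_controller(
            path="/sys/fs/cgroup/cgroup.controllers"):
        """Return True if the unified cgroup hierarchy exposes 'cpu'."""
        try:
            with open(path) as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            # No unified hierarchy mounted -> not a cgroups v2 host.
            return False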
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.007 189391 DEBUG nova.virt.libvirt.driver [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.008 189391 DEBUG nova.virt.hardware [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T23:17:57Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='abcd883d-a9af-4dee-93ae-b5623bc853b6',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-26T23:17:52Z,direct_url=<?>,disk_format='qcow2',id=422f324f-e13a-4c74-ba29-023e791ed636,min_disk=0,min_ram=0,name='cirros',owner='dd2e793599b6418881c391df7f71e0c6',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-26T23:17:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.009 189391 DEBUG nova.virt.hardware [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.010 189391 DEBUG nova.virt.hardware [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.010 189391 DEBUG nova.virt.hardware [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.011 189391 DEBUG nova.virt.hardware [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.012 189391 DEBUG nova.virt.hardware [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.012 189391 DEBUG nova.virt.hardware [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.013 189391 DEBUG nova.virt.hardware [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.014 189391 DEBUG nova.virt.hardware [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.015 189391 DEBUG nova.virt.hardware [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.016 189391 DEBUG nova.virt.hardware [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
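With no topology constraints from the flavor or image (every preference and limit logs as 0:0:0, so the maxima fall back to 65536), the only possible layout for one vCPU is 1 socket x 1 core x 1 thread. The enumeration is essentially a factorization of the vCPU count under those maxima; a simplified sketch that reproduces the "Got 1 possible topologies" result (it ignores NUMA placement and Nova's preference sorting):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        """Yield (sockets, cores, threads) triples with product == vcpus."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            rem = vcpus // s
            for c in range(1, min(rem, max_cores) + 1):
                if rem % c:
                    continue
                t = rem // c
                if t <= max_threads:
                    yield (s, c, t)

    # list(possible_topologies(1)) == [(1, 1, 1)], matching the log.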
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.023 189391 DEBUG nova.virt.libvirt.vif [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T23:23:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-fhdmirp-runjo4u2h7na-he3onrrerp7p-vnf-pxixoz6blnnj',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-fhdmirp-runjo4u2h7na-he3onrrerp7p-vnf-pxixoz6blnnj',id=3,image_ref='422f324f-e13a-4c74-ba29-023e791ed636',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='6ec897c5-079b-468e-ab49-e7a7350f9bc9'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd2e793599b6418881c391df7f71e0c6',ramdisk_id='',reservation_id='r-1qhi57sg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='422f324f-e13a-4c74-ba29-023e791ed636',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T23:23:42Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01MzkyODE0MzQ0NjY2MzcxMjg4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTUzOTI4MTQzNDQ2NjYzNzEyODg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTM5MjgxNDM0NDY2NjM3MTI4OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTUzOTI4MTQzNDQ2NjYzNzEyODg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01MzkyODE0MzQ0NjY2MzcxMjg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01MzkyODE0MzQ0NjY2MzcxMjg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Nov 26 23:23:45 compute-0 nova_compute[189387]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTM5MjgxNDM0NDY2NjM3MTI4OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTUzOTI4MTQzNDQ2NjYzNzEyODg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01MzkyODE0MzQ0NjY2MzcxMjg4PT0tLQo=',user_id='6ad061874c77438db2e6d8efb2b1400b',uuid=2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c5ede21d-87b7-4215-9363-b5863725bc1e", "address": "fa:16:3e:d8:b5:86", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5ede21d-87", "ovs_interfaceid": "c5ede21d-87b7-4215-9363-b5863725bc1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.024 189391 DEBUG nova.network.os_vif_util [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Converting VIF {"id": "c5ede21d-87b7-4215-9363-b5863725bc1e", "address": "fa:16:3e:d8:b5:86", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5ede21d-87", "ovs_interfaceid": "c5ede21d-87b7-4215-9363-b5863725bc1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.025 189391 DEBUG nova.network.os_vif_util [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:b5:86,bridge_name='br-int',has_traffic_filtering=True,id=c5ede21d-87b7-4215-9363-b5863725bc1e,network=Network(16c31f2c-5dd2-49b9-b313-1ecd3b059554),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc5ede21d-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.026 189391 DEBUG nova.objects.instance [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.042 189391 DEBUG nova.virt.libvirt.driver [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] End _get_guest_xml xml=<domain type="kvm">
Nov 26 23:23:45 compute-0 nova_compute[189387]:  <uuid>2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd</uuid>
Nov 26 23:23:45 compute-0 nova_compute[189387]:  <name>instance-00000003</name>
Nov 26 23:23:45 compute-0 nova_compute[189387]:  <memory>524288</memory>
Nov 26 23:23:45 compute-0 nova_compute[189387]:  <vcpu>1</vcpu>
Nov 26 23:23:45 compute-0 nova_compute[189387]:  <metadata>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <nova:name>vn-fhdmirp-runjo4u2h7na-he3onrrerp7p-vnf-pxixoz6blnnj</nova:name>
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <nova:creationTime>2025-11-26 23:23:44</nova:creationTime>
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <nova:flavor name="m1.small">
Nov 26 23:23:45 compute-0 nova_compute[189387]:        <nova:memory>512</nova:memory>
Nov 26 23:23:45 compute-0 nova_compute[189387]:        <nova:disk>1</nova:disk>
Nov 26 23:23:45 compute-0 nova_compute[189387]:        <nova:swap>0</nova:swap>
Nov 26 23:23:45 compute-0 nova_compute[189387]:        <nova:ephemeral>1</nova:ephemeral>
Nov 26 23:23:45 compute-0 nova_compute[189387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 23:23:45 compute-0 nova_compute[189387]:      </nova:flavor>
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <nova:owner>
Nov 26 23:23:45 compute-0 nova_compute[189387]:        <nova:user uuid="6ad061874c77438db2e6d8efb2b1400b">admin</nova:user>
Nov 26 23:23:45 compute-0 nova_compute[189387]:        <nova:project uuid="dd2e793599b6418881c391df7f71e0c6">admin</nova:project>
Nov 26 23:23:45 compute-0 nova_compute[189387]:      </nova:owner>
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <nova:root type="image" uuid="422f324f-e13a-4c74-ba29-023e791ed636"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <nova:ports>
Nov 26 23:23:45 compute-0 nova_compute[189387]:        <nova:port uuid="c5ede21d-87b7-4215-9363-b5863725bc1e">
Nov 26 23:23:45 compute-0 nova_compute[189387]:          <nova:ip type="fixed" address="192.168.0.214" ipVersion="4"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:        </nova:port>
Nov 26 23:23:45 compute-0 nova_compute[189387]:      </nova:ports>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    </nova:instance>
Nov 26 23:23:45 compute-0 nova_compute[189387]:  </metadata>
Nov 26 23:23:45 compute-0 nova_compute[189387]:  <sysinfo type="smbios">
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <system>
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <entry name="manufacturer">RDO</entry>
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <entry name="serial">2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd</entry>
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <entry name="uuid">2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd</entry>
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <entry name="family">Virtual Machine</entry>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    </system>
Nov 26 23:23:45 compute-0 nova_compute[189387]:  </sysinfo>
Nov 26 23:23:45 compute-0 nova_compute[189387]:  <os>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <boot dev="hd"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <smbios mode="sysinfo"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:  </os>
Nov 26 23:23:45 compute-0 nova_compute[189387]:  <features>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <acpi/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <apic/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <vmcoreinfo/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:  </features>
Nov 26 23:23:45 compute-0 nova_compute[189387]:  <clock offset="utc">
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <timer name="hpet" present="no"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:  </clock>
Nov 26 23:23:45 compute-0 nova_compute[189387]:  <cpu mode="host-model" match="exact">
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:  </cpu>
Nov 26 23:23:45 compute-0 nova_compute[189387]:  <devices>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <disk type="file" device="disk">
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <target dev="vda" bus="virtio"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <disk type="file" device="disk">
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <target dev="vdb" bus="virtio"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <disk type="file" device="cdrom">
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <driver name="qemu" type="raw" cache="none"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.config"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <target dev="sda" bus="sata"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <interface type="ethernet">
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <mac address="fa:16:3e:d8:b5:86"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <mtu size="1442"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <target dev="tapc5ede21d-87"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    </interface>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <serial type="pty">
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <log file="/var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/console.log" append="off"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    </serial>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <video>
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    </video>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <input type="tablet" bus="usb"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <rng model="virtio">
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <backend model="random">/dev/urandom</backend>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    </rng>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <controller type="usb" index="0"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    <memballoon model="virtio">
Nov 26 23:23:45 compute-0 nova_compute[189387]:      <stats period="10"/>
Nov 26 23:23:45 compute-0 nova_compute[189387]:    </memballoon>
Nov 26 23:23:45 compute-0 nova_compute[189387]:  </devices>
Nov 26 23:23:45 compute-0 nova_compute[189387]: </domain>
Nov 26 23:23:45 compute-0 nova_compute[189387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
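The finished domain is a q35 KVM guest with two qcow2 virtio disks (root and ephemeral), a raw SATA config-drive CD-ROM, an ethernet tap interface at MTU 1442, and the usual pcie-root-port fan-out for hotplug. A short sketch that sanity-checks such a dump with the standard library, assuming the XML has been pasted out of the log:

    import xml.etree.ElementTree as ET

    def summarize_domain(xml_text):
        root = ET.fromstring(xml_text)
        disks = [(d.get("device"), d.find("target").get("dev"),
                  d.find("driver").get("type"))
                 for d in root.findall("./devices/disk")]
        nics = [(i.get("type"), i.find("target").get("dev"))
                for i in root.findall("./devices/interface")]
        return {"name": root.findtext("name"),
                "memory_kib": int(root.findtext("memory")),
                "disks": disks, "interfaces": nics}

    # For the XML above: disks == [("disk", "vda", "qcow2"),
    # ("disk", "vdb", "qcow2"), ("cdrom", "sda", "raw")] and
    # interfaces == [("ethernet", "tapc5ede21d-87")].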
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.043 189391 DEBUG nova.compute.manager [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Preparing to wait for external event network-vif-plugged-c5ede21d-87b7-4215-9363-b5863725bc1e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.044 189391 DEBUG oslo_concurrency.lockutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.044 189391 DEBUG oslo_concurrency.lockutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.045 189391 DEBUG oslo_concurrency.lockutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.045 189391 DEBUG nova.virt.libvirt.vif [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T23:23:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-fhdmirp-runjo4u2h7na-he3onrrerp7p-vnf-pxixoz6blnnj',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-fhdmirp-runjo4u2h7na-he3onrrerp7p-vnf-pxixoz6blnnj',id=3,image_ref='422f324f-e13a-4c74-ba29-023e791ed636',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='6ec897c5-079b-468e-ab49-e7a7350f9bc9'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd2e793599b6418881c391df7f71e0c6',ramdisk_id='',reservation_id='r-1qhi57sg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='422f324f-e13a-4c74-ba29-023e791ed636',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T23:23:42Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01MzkyODE0MzQ0NjY2MzcxMjg4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTUzOTI4MTQzNDQ2NjYzNzEyODg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTM5MjgxNDM0NDY2NjM3MTI4OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTUzOTI4MTQzNDQ2NjYzNzEyODg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01MzkyODE0MzQ0NjY2MzcxMjg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01MzkyODE0MzQ0NjY2MzcxMjg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Nov 26 23:23:45 compute-0 nova_compute[189387]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTM5MjgxNDM0NDY2NjM3MTI4OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTUzOTI4MTQzNDQ2NjYzNzEyODg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01MzkyODE0MzQ0NjY2MzcxMjg4PT0tLQo=',user_id='6ad061874c77438db2e6d8efb2b1400b',uuid=2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c5ede21d-87b7-4215-9363-b5863725bc1e", "address": "fa:16:3e:d8:b5:86", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5ede21d-87", "ovs_interfaceid": "c5ede21d-87b7-4215-9363-b5863725bc1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.046 189391 DEBUG nova.network.os_vif_util [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Converting VIF {"id": "c5ede21d-87b7-4215-9363-b5863725bc1e", "address": "fa:16:3e:d8:b5:86", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5ede21d-87", "ovs_interfaceid": "c5ede21d-87b7-4215-9363-b5863725bc1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.047 189391 DEBUG nova.network.os_vif_util [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d8:b5:86,bridge_name='br-int',has_traffic_filtering=True,id=c5ede21d-87b7-4215-9363-b5863725bc1e,network=Network(16c31f2c-5dd2-49b9-b313-1ecd3b059554),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc5ede21d-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.047 189391 DEBUG os_vif [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:b5:86,bridge_name='br-int',has_traffic_filtering=True,id=c5ede21d-87b7-4215-9363-b5863725bc1e,network=Network(16c31f2c-5dd2-49b9-b313-1ecd3b059554),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc5ede21d-87') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.048 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.048 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.049 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.057 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.057 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc5ede21d-87, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.058 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc5ede21d-87, col_values=(('external_ids', {'iface-id': 'c5ede21d-87b7-4215-9363-b5863725bc1e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d8:b5:86', 'vm-uuid': '2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
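The AddPortCommand/DbSetCommand entries above are ovsdbapp commands batched into a single transaction against the local Open_vSwitch database. A minimal sketch of the same pattern, assuming ovsdbapp's documented Open_vSwitch schema API and the usual local socket path (values copied from the log, connection details illustrative):

    # Sketch: replay the logged transaction with ovsdbapp (not nova's code).
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # idx=0: AddPortCommand(bridge=br-int, port=tapc5ede21d-87, may_exist=True)
        txn.add(api.add_port('br-int', 'tapc5ede21d-87', may_exist=True))
        # idx=1: DbSetCommand on the Interface row, same external_ids as logged
        txn.add(api.db_set(
            'Interface', 'tapc5ede21d-87',
            ('external_ids', {'iface-id': 'c5ede21d-87b7-4215-9363-b5863725bc1e',
                              'iface-status': 'active',
                              'attached-mac': 'fa:16:3e:d8:b5:86',
                              'vm-uuid': '2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd'})))

Both commands commit together, which is why the earlier AddBridgeCommand run reported "Transaction caused no change" when br-int already existed.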
Nov 26 23:23:45 compute-0 NetworkManager[56227]: <info>  [1764199425.0622] manager: (tapc5ede21d-87): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.068 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.076 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.077 189391 INFO os_vif [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d8:b5:86,bridge_name='br-int',has_traffic_filtering=True,id=c5ede21d-87b7-4215-9363-b5863725bc1e,network=Network(16c31f2c-5dd2-49b9-b313-1ecd3b059554),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc5ede21d-87')#033[00m
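The plug itself goes through the os-vif library, which receives the VIFOpenVSwitch object printed above. A rough sketch of that call path, with hand-built objects standing in for what nova derives from the Neutron port (field values copied from the log; everything else, including the omitted port_profile, is illustrative):

    # Sketch of the os_vif.plug() call the INFO line above reports.
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the registered plugins, including 'ovs'

    my_vif = vif.VIFOpenVSwitch(
        id='c5ede21d-87b7-4215-9363-b5863725bc1e',
        address='fa:16:3e:d8:b5:86',
        vif_name='tapc5ede21d-87',
        bridge_name='br-int',
        network=network.Network(id='16c31f2c-5dd2-49b9-b313-1ecd3b059554'))

    instance = instance_info.InstanceInfo(
        uuid='2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd', name='instance-00000003')

    os_vif.plug(my_vif, instance)  # ends in the AddPort/DbSet transaction above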
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.141 189391 DEBUG nova.virt.libvirt.driver [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.141 189391 DEBUG nova.virt.libvirt.driver [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.142 189391 DEBUG nova.virt.libvirt.driver [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.143 189391 DEBUG nova.virt.libvirt.driver [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] No VIF found with MAC fa:16:3e:d8:b5:86, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.144 189391 INFO nova.virt.libvirt.driver [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Using config drive#033[00m
Nov 26 23:23:45 compute-0 rsyslogd[236865]: message too long (8192) with configured size 8096, begin of message is: 2025-11-26 23:23:45.023 189391 DEBUG nova.virt.libvirt.vif [None req-dc189023-b7 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 26 23:23:45 compute-0 rsyslogd[236865]: message too long (8192) with configured size 8096, begin of message is: 2025-11-26 23:23:45.045 189391 DEBUG nova.virt.libvirt.vif [None req-dc189023-b7 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.472 189391 INFO nova.virt.libvirt.driver [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Creating config drive at /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.config#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.485 189391 DEBUG oslo_concurrency.processutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprdfx9b52 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.635 189391 DEBUG oslo_concurrency.processutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprdfx9b52" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
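Config-drive creation is an mkisofs run over a temporary directory of staged metadata files, executed through oslo.concurrency as the two processutils lines above show. A minimal sketch of the same invocation (command and publisher string copied from the log; the staging directory existed only during the build):

    # Sketch: the logged mkisofs invocation via oslo.concurrency.
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmprdfx9b52')  # temp dir from the log, gone after the build

The volume label config-2 is what cloud-init and similar guests probe for when mounting the drive.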
Nov 26 23:23:45 compute-0 kernel: tapc5ede21d-87: entered promiscuous mode
Nov 26 23:23:45 compute-0 NetworkManager[56227]: <info>  [1764199425.7379] manager: (tapc5ede21d-87): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Nov 26 23:23:45 compute-0 ovn_controller[97697]: 2025-11-26T23:23:45Z|00040|binding|INFO|Claiming lport c5ede21d-87b7-4215-9363-b5863725bc1e for this chassis.
Nov 26 23:23:45 compute-0 ovn_controller[97697]: 2025-11-26T23:23:45Z|00041|binding|INFO|c5ede21d-87b7-4215-9363-b5863725bc1e: Claiming fa:16:3e:d8:b5:86 192.168.0.214
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.753 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.760 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:23:45 compute-0 ovn_controller[97697]: 2025-11-26T23:23:45Z|00042|binding|INFO|Setting lport c5ede21d-87b7-4215-9363-b5863725bc1e ovn-installed in OVS
Nov 26 23:23:45 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:23:45.781 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:b5:86 192.168.0.214'], port_security=['fa:16:3e:d8:b5:86 192.168.0.214'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-nvijrfhdmirp-runjo4u2h7na-he3onrrerp7p-port-ah5ptqkcbqsc', 'neutron:cidrs': '192.168.0.214/24', 'neutron:device_id': '2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-nvijrfhdmirp-runjo4u2h7na-he3onrrerp7p-port-ah5ptqkcbqsc', 'neutron:project_id': 'dd2e793599b6418881c391df7f71e0c6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f63b4453-d311-40b9-8478-8f99967e0625', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.208'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef9a1501-6a1b-48e2-a80c-71a5e303b45d, chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=c5ede21d-87b7-4215-9363-b5863725bc1e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
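The metadata agent sees this binding because it registered an ovsdbapp row event against the southbound Port_Binding table; the "Matched UPDATE" line is that matcher firing when the row gains a chassis. A skeletal version of the pattern (class body and match condition are illustrative, not neutron's exact code):

    # Sketch of an ovsdbapp row event like the one matched above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, chassis_name):
            self.chassis_name = chassis_name
            # events=('update',), table='Port_Binding', conditions=None
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Fire only when the port just got a chassis (old= had none),
            # mirroring old=Port_Binding(chassis=[]) in the log line.
            return bool(row.chassis) and not getattr(old, 'chassis', None)

        def run(self, event, row, old):
            print('port %s bound to our chassis' % row.logical_port)

On a match, the agent's handler is what produces the "bound to our chassis" and "Provisioning metadata for network" lines that follow.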
Nov 26 23:23:45 compute-0 ovn_controller[97697]: 2025-11-26T23:23:45Z|00043|binding|INFO|Setting lport c5ede21d-87b7-4215-9363-b5863725bc1e up in Southbound
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.783 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:23:45 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:23:45.791 106595 INFO neutron.agent.ovn.metadata.agent [-] Port c5ede21d-87b7-4215-9363-b5863725bc1e in datapath 16c31f2c-5dd2-49b9-b313-1ecd3b059554 bound to our chassis#033[00m
Nov 26 23:23:45 compute-0 systemd-udevd[241815]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 23:23:45 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:23:45.794 106595 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 16c31f2c-5dd2-49b9-b313-1ecd3b059554#033[00m
Nov 26 23:23:45 compute-0 systemd-machined[155674]: New machine qemu-3-instance-00000003.
Nov 26 23:23:45 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Nov 26 23:23:45 compute-0 NetworkManager[56227]: <info>  [1764199425.8186] device (tapc5ede21d-87): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 23:23:45 compute-0 NetworkManager[56227]: <info>  [1764199425.8273] device (tapc5ede21d-87): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 23:23:45 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:23:45.825 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[d734de5c-1940-41c2-b744-573e1dfb3759]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:23:45 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:23:45.865 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[b6cf4809-1de0-4432-9c3e-be28018ee91d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:23:45 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:23:45.871 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[48fcfa89-7e89-4508-8898-a8c0f7377da0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:23:45 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:23:45.920 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[54658364-0426-48b4-87a1-35ab968a00a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:23:45 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:23:45.946 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[c0b215a8-9fb2-4bfd-b195-fa32ec5aec1a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16c31f2c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:bc:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383451, 'reachable_time': 38268, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 241830, 'error': None, 'target': 'ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:23:45 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:23:45.972 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[6fb747aa-eef4-4893-9d2e-487d1a0e80ab]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap16c31f2c-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383460, 'tstamp': 383460}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241831, 'error': None, 'target': 'ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap16c31f2c-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383463, 'tstamp': 383463}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241831, 'error': None, 'target': 'ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:23:45 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:23:45.975 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16c31f2c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.978 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:23:45 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:23:45.983 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16c31f2c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:23:45 compute-0 nova_compute[189387]: 2025-11-26 23:23:45.983 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:23:45 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:23:45.984 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:23:45 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:23:45.985 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap16c31f2c-50, col_values=(('external_ids', {'iface-id': 'fcca7a28-5262-4637-8ef9-d543dee768b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:23:45 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:23:45.986 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:23:46 compute-0 nova_compute[189387]: 2025-11-26 23:23:46.426 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764199426.425468, 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:23:46 compute-0 nova_compute[189387]: 2025-11-26 23:23:46.427 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] VM Started (Lifecycle Event)#033[00m
Nov 26 23:23:46 compute-0 nova_compute[189387]: 2025-11-26 23:23:46.454 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:23:46 compute-0 nova_compute[189387]: 2025-11-26 23:23:46.463 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764199426.4255855, 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:23:46 compute-0 nova_compute[189387]: 2025-11-26 23:23:46.464 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] VM Paused (Lifecycle Event)#033[00m
Nov 26 23:23:46 compute-0 nova_compute[189387]: 2025-11-26 23:23:46.492 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:23:46 compute-0 nova_compute[189387]: 2025-11-26 23:23:46.501 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 23:23:46 compute-0 nova_compute[189387]: 2025-11-26 23:23:46.541 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 23:23:48 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 26 23:23:48 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 26 23:23:48 compute-0 podman[241839]: 2025-11-26 23:23:48.318713942 +0000 UTC m=+0.145153594 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3)
Nov 26 23:23:48 compute-0 nova_compute[189387]: 2025-11-26 23:23:48.696 189391 DEBUG nova.compute.manager [req-3b57ec5b-e275-4a1b-bb28-46f42020d146 req-bf5a0ecf-0360-439d-a0a5-a517ad7f69d4 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Received event network-vif-plugged-c5ede21d-87b7-4215-9363-b5863725bc1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:23:48 compute-0 nova_compute[189387]: 2025-11-26 23:23:48.699 189391 DEBUG oslo_concurrency.lockutils [req-3b57ec5b-e275-4a1b-bb28-46f42020d146 req-bf5a0ecf-0360-439d-a0a5-a517ad7f69d4 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:23:48 compute-0 nova_compute[189387]: 2025-11-26 23:23:48.700 189391 DEBUG oslo_concurrency.lockutils [req-3b57ec5b-e275-4a1b-bb28-46f42020d146 req-bf5a0ecf-0360-439d-a0a5-a517ad7f69d4 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:23:48 compute-0 nova_compute[189387]: 2025-11-26 23:23:48.701 189391 DEBUG oslo_concurrency.lockutils [req-3b57ec5b-e275-4a1b-bb28-46f42020d146 req-bf5a0ecf-0360-439d-a0a5-a517ad7f69d4 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:23:48 compute-0 nova_compute[189387]: 2025-11-26 23:23:48.702 189391 DEBUG nova.compute.manager [req-3b57ec5b-e275-4a1b-bb28-46f42020d146 req-bf5a0ecf-0360-439d-a0a5-a517ad7f69d4 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Processing event network-vif-plugged-c5ede21d-87b7-4215-9363-b5863725bc1e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 26 23:23:48 compute-0 nova_compute[189387]: 2025-11-26 23:23:48.704 189391 DEBUG nova.compute.manager [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
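The lockutils lines above are the two halves of nova's external-event handshake: the build thread registers an event named network-vif-plugged-<port-id> and waits on it, while the thread servicing Neutron's callback pops it under the per-instance "-events" lock and signals it. A toy version of the pattern (threading stands in for nova's eventlet machinery; names are illustrative):

    # Toy version of the register/wait + pop/signal handshake logged above.
    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()      # the "<uuid>-events" lock
            self._events = {}                  # event name -> threading.Event

        def prepare(self, name):
            with self._lock:
                return self._events.setdefault(name, threading.Event())

        def pop_and_signal(self, name):
            with self._lock:                   # acquire/release pair in the log
                ev = self._events.pop(name, None)
            if ev is None:
                return False                   # "No waiting events found ..."
            ev.set()
            return True

    events = InstanceEvents()
    waiter = events.prepare('network-vif-plugged-c5ede21d-87b7-4215-9363-b5863725bc1e')
    events.pop_and_signal('network-vif-plugged-c5ede21d-87b7-4215-9363-b5863725bc1e')
    waiter.wait(timeout=300)  # build thread: "wait completed in 2 seconds"

The 23:23:50 warning further down is the other branch: the same event arrives again after the instance is active, finds no waiter, and is logged as unexpected.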
Nov 26 23:23:48 compute-0 nova_compute[189387]: 2025-11-26 23:23:48.711 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764199428.7105997, 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:23:48 compute-0 nova_compute[189387]: 2025-11-26 23:23:48.712 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] VM Resumed (Lifecycle Event)#033[00m
Nov 26 23:23:48 compute-0 nova_compute[189387]: 2025-11-26 23:23:48.717 189391 DEBUG nova.virt.libvirt.driver [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 26 23:23:48 compute-0 nova_compute[189387]: 2025-11-26 23:23:48.726 189391 INFO nova.virt.libvirt.driver [-] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Instance spawned successfully.#033[00m
Nov 26 23:23:48 compute-0 nova_compute[189387]: 2025-11-26 23:23:48.727 189391 DEBUG nova.virt.libvirt.driver [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 26 23:23:48 compute-0 nova_compute[189387]: 2025-11-26 23:23:48.742 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:23:48 compute-0 nova_compute[189387]: 2025-11-26 23:23:48.748 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 23:23:48 compute-0 nova_compute[189387]: 2025-11-26 23:23:48.766 189391 DEBUG nova.virt.libvirt.driver [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:23:48 compute-0 nova_compute[189387]: 2025-11-26 23:23:48.767 189391 DEBUG nova.virt.libvirt.driver [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:23:48 compute-0 nova_compute[189387]: 2025-11-26 23:23:48.769 189391 DEBUG nova.virt.libvirt.driver [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:23:48 compute-0 nova_compute[189387]: 2025-11-26 23:23:48.771 189391 DEBUG nova.virt.libvirt.driver [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:23:48 compute-0 nova_compute[189387]: 2025-11-26 23:23:48.773 189391 DEBUG nova.virt.libvirt.driver [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:23:48 compute-0 nova_compute[189387]: 2025-11-26 23:23:48.775 189391 DEBUG nova.virt.libvirt.driver [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:23:48 compute-0 nova_compute[189387]: 2025-11-26 23:23:48.783 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 23:23:48 compute-0 nova_compute[189387]: 2025-11-26 23:23:48.843 189391 DEBUG nova.network.neutron [req-68576cff-a0f9-47fb-b95e-649b6a0adc07 req-deb597eb-8783-4d15-83d5-a660706459ea f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Updated VIF entry in instance network info cache for port c5ede21d-87b7-4215-9363-b5863725bc1e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 23:23:48 compute-0 nova_compute[189387]: 2025-11-26 23:23:48.845 189391 DEBUG nova.network.neutron [req-68576cff-a0f9-47fb-b95e-649b6a0adc07 req-deb597eb-8783-4d15-83d5-a660706459ea f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Updating instance_info_cache with network_info: [{"id": "c5ede21d-87b7-4215-9363-b5863725bc1e", "address": "fa:16:3e:d8:b5:86", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5ede21d-87", "ovs_interfaceid": "c5ede21d-87b7-4215-9363-b5863725bc1e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
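The network_info cache entry above is plain JSON, so pulling the fixed and floating addresses out of one of these blobs is a couple of dict hops. A small sketch, using a trimmed copy of the structure logged above:

    # Sketch: walking a nova network_info VIF entry like the one above.
    vif = {
        "id": "c5ede21d-87b7-4215-9363-b5863725bc1e",
        "address": "fa:16:3e:d8:b5:86",
        "network": {"subnets": [{
            "cidr": "192.168.0.0/24",
            "ips": [{"address": "192.168.0.214",
                     "floating_ips": [{"address": "192.168.122.208"}]}]}]},
    }

    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            fips = [f["address"] for f in ip.get("floating_ips", [])]
            print(vif["address"], ip["address"], fips)
    # fa:16:3e:d8:b5:86 192.168.0.214 ['192.168.122.208']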
Nov 26 23:23:48 compute-0 nova_compute[189387]: 2025-11-26 23:23:48.852 189391 INFO nova.compute.manager [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Took 5.86 seconds to spawn the instance on the hypervisor.#033[00m
Nov 26 23:23:48 compute-0 nova_compute[189387]: 2025-11-26 23:23:48.853 189391 DEBUG nova.compute.manager [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:23:48 compute-0 nova_compute[189387]: 2025-11-26 23:23:48.867 189391 DEBUG oslo_concurrency.lockutils [req-68576cff-a0f9-47fb-b95e-649b6a0adc07 req-deb597eb-8783-4d15-83d5-a660706459ea f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Releasing lock "refresh_cache-2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:23:48 compute-0 nova_compute[189387]: 2025-11-26 23:23:48.934 189391 INFO nova.compute.manager [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Took 6.44 seconds to build instance.#033[00m
Nov 26 23:23:49 compute-0 nova_compute[189387]: 2025-11-26 23:23:49.116 189391 DEBUG oslo_concurrency.lockutils [None req-dc189023-b795-4c59-baae-f1d58940d61e 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:23:49 compute-0 nova_compute[189387]: 2025-11-26 23:23:49.846 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:23:50 compute-0 nova_compute[189387]: 2025-11-26 23:23:50.060 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:23:50 compute-0 nova_compute[189387]: 2025-11-26 23:23:50.812 189391 DEBUG nova.compute.manager [req-3a8e6b6e-2ad4-428e-8290-cfe608325fe9 req-99026edc-cae4-4f8b-af1b-914c235c8b6d f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Received event network-vif-plugged-c5ede21d-87b7-4215-9363-b5863725bc1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:23:50 compute-0 nova_compute[189387]: 2025-11-26 23:23:50.813 189391 DEBUG oslo_concurrency.lockutils [req-3a8e6b6e-2ad4-428e-8290-cfe608325fe9 req-99026edc-cae4-4f8b-af1b-914c235c8b6d f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:23:50 compute-0 nova_compute[189387]: 2025-11-26 23:23:50.813 189391 DEBUG oslo_concurrency.lockutils [req-3a8e6b6e-2ad4-428e-8290-cfe608325fe9 req-99026edc-cae4-4f8b-af1b-914c235c8b6d f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:23:50 compute-0 nova_compute[189387]: 2025-11-26 23:23:50.814 189391 DEBUG oslo_concurrency.lockutils [req-3a8e6b6e-2ad4-428e-8290-cfe608325fe9 req-99026edc-cae4-4f8b-af1b-914c235c8b6d f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:23:50 compute-0 nova_compute[189387]: 2025-11-26 23:23:50.814 189391 DEBUG nova.compute.manager [req-3a8e6b6e-2ad4-428e-8290-cfe608325fe9 req-99026edc-cae4-4f8b-af1b-914c235c8b6d f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] No waiting events found dispatching network-vif-plugged-c5ede21d-87b7-4215-9363-b5863725bc1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:23:50 compute-0 nova_compute[189387]: 2025-11-26 23:23:50.815 189391 WARNING nova.compute.manager [req-3a8e6b6e-2ad4-428e-8290-cfe608325fe9 req-99026edc-cae4-4f8b-af1b-914c235c8b6d f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Received unexpected event network-vif-plugged-c5ede21d-87b7-4215-9363-b5863725bc1e for instance with vm_state active and task_state None.#033[00m
Nov 26 23:23:52 compute-0 podman[241878]: 2025-11-26 23:23:52.832367954 +0000 UTC m=+0.113661251 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:23:54 compute-0 nova_compute[189387]: 2025-11-26 23:23:54.848 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:23:55 compute-0 nova_compute[189387]: 2025-11-26 23:23:55.062 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:23:57 compute-0 podman[241900]: 2025-11-26 23:23:57.790702031 +0000 UTC m=+0.073985240 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 26 23:23:59 compute-0 podman[203621]: time="2025-11-26T23:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:23:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:23:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4802 "" "Go-http-client/1.1"
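The podman[203621] lines are the API service logging HTTP requests that arrive over /run/podman/podman.sock (the CONTAINER_HOST setting in the podman_exporter config above points there). Reproducing such a query from Python needs only an HTTP connection bound to a unix socket; a stdlib-only sketch, with the socket path and API version taken from these logs:

    # Sketch: query the podman libpod API over its unix socket (stdlib only).
    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, sock_path):
            super().__init__('localhost')
            self._sock_path = sock_path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._sock_path)
            self.sock = s

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    resp = conn.getresponse()
    print(resp.status, resp.read()[:120])  # 200 plus a JSON list of containers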
Nov 26 23:23:59 compute-0 nova_compute[189387]: 2025-11-26 23:23:59.850 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:24:00 compute-0 nova_compute[189387]: 2025-11-26 23:24:00.066 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:24:01 compute-0 openstack_network_exporter[205787]: ERROR   23:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:24:01 compute-0 openstack_network_exporter[205787]: ERROR   23:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:24:01 compute-0 openstack_network_exporter[205787]: ERROR   23:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:24:01 compute-0 openstack_network_exporter[205787]: ERROR   23:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:24:01 compute-0 openstack_network_exporter[205787]: ERROR   23:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:24:04 compute-0 nova_compute[189387]: 2025-11-26 23:24:04.854 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:24:05 compute-0 nova_compute[189387]: 2025-11-26 23:24:05.068 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:24:05 compute-0 podman[241920]: 2025-11-26 23:24:05.86852788 +0000 UTC m=+0.155056859 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 26 23:24:08 compute-0 podman[241948]: 2025-11-26 23:24:08.828051375 +0000 UTC m=+0.102327386 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 26 23:24:08 compute-0 podman[241947]: 2025-11-26 23:24:08.829663938 +0000 UTC m=+0.113731050 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 23:24:08 compute-0 podman[241949]: 2025-11-26 23:24:08.839582832 +0000 UTC m=+0.112351633 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 26 23:24:08 compute-0 podman[241950]: 2025-11-26 23:24:08.844831672 +0000 UTC m=+0.107316959 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, version=9.6, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-type=git, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, name=ubi9-minimal, architecture=x86_64, io.buildah.version=1.33.7, release=1755695350, vendor=Red Hat, Inc., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 26 23:24:08 compute-0 podman[241946]: 2025-11-26 23:24:08.86352783 +0000 UTC m=+0.148701221 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., name=ubi9, release=1214.1726694543, io.openshift.tags=base rhel9, vcs-type=git, io.openshift.expose-services=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, managed_by=edpm_ansible, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 26 23:24:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:24:09.628 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:24:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:24:09.628 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:24:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:24:09.629 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:24:09 compute-0 nova_compute[189387]: 2025-11-26 23:24:09.857 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:24:10 compute-0 nova_compute[189387]: 2025-11-26 23:24:10.072 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:24:14 compute-0 nova_compute[189387]: 2025-11-26 23:24:14.859 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:24:15 compute-0 nova_compute[189387]: 2025-11-26 23:24:15.074 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:24:15 compute-0 ovn_controller[97697]: 2025-11-26T23:24:15Z|00044|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Nov 26 23:24:18 compute-0 podman[242037]: 2025-11-26 23:24:18.818268293 +0000 UTC m=+0.102886370 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:24:19 compute-0 nova_compute[189387]: 2025-11-26 23:24:19.862 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:24:20 compute-0 nova_compute[189387]: 2025-11-26 23:24:20.077 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:24:21 compute-0 ovn_controller[97697]: 2025-11-26T23:24:21Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d8:b5:86 192.168.0.214
Nov 26 23:24:21 compute-0 ovn_controller[97697]: 2025-11-26T23:24:21Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d8:b5:86 192.168.0.214
Nov 26 23:24:22 compute-0 nova_compute[189387]: 2025-11-26 23:24:22.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:24:22 compute-0 nova_compute[189387]: 2025-11-26 23:24:22.127 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 26 23:24:22 compute-0 nova_compute[189387]: 2025-11-26 23:24:22.151 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 26 23:24:23 compute-0 podman[242082]: 2025-11-26 23:24:23.847875185 +0000 UTC m=+0.122058861 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:24:24 compute-0 nova_compute[189387]: 2025-11-26 23:24:24.866 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:24:25 compute-0 nova_compute[189387]: 2025-11-26 23:24:25.079 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:24:25 compute-0 nova_compute[189387]: 2025-11-26 23:24:25.151 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:24:25 compute-0 nova_compute[189387]: 2025-11-26 23:24:25.153 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 23:24:25 compute-0 nova_compute[189387]: 2025-11-26 23:24:25.868 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-0d344cef-8e34-4a0c-b747-b8f1f12bbe26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:24:25 compute-0 nova_compute[189387]: 2025-11-26 23:24:25.870 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-0d344cef-8e34-4a0c-b747-b8f1f12bbe26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:24:25 compute-0 nova_compute[189387]: 2025-11-26 23:24:25.871 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 26 23:24:27 compute-0 nova_compute[189387]: 2025-11-26 23:24:27.463 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Updating instance_info_cache with network_info: [{"id": "faf484ac-094d-4505-a5ff-b8f5b82ac0cf", "address": "fa:16:3e:22:64:1d", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf484ac-09", "ovs_interfaceid": "faf484ac-094d-4505-a5ff-b8f5b82ac0cf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:24:27 compute-0 nova_compute[189387]: 2025-11-26 23:24:27.483 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-0d344cef-8e34-4a0c-b747-b8f1f12bbe26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:24:27 compute-0 nova_compute[189387]: 2025-11-26 23:24:27.484 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 26 23:24:27 compute-0 nova_compute[189387]: 2025-11-26 23:24:27.486 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:24:27 compute-0 nova_compute[189387]: 2025-11-26 23:24:27.487 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:24:27 compute-0 nova_compute[189387]: 2025-11-26 23:24:27.512 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:24:27 compute-0 nova_compute[189387]: 2025-11-26 23:24:27.514 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:24:27 compute-0 nova_compute[189387]: 2025-11-26 23:24:27.515 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:24:27 compute-0 nova_compute[189387]: 2025-11-26 23:24:27.516 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 23:24:27 compute-0 nova_compute[189387]: 2025-11-26 23:24:27.648 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:24:27 compute-0 nova_compute[189387]: 2025-11-26 23:24:27.731 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:24:27 compute-0 nova_compute[189387]: 2025-11-26 23:24:27.733 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:24:27 compute-0 nova_compute[189387]: 2025-11-26 23:24:27.809 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:24:27 compute-0 nova_compute[189387]: 2025-11-26 23:24:27.810 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:24:27 compute-0 nova_compute[189387]: 2025-11-26 23:24:27.867 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:24:27 compute-0 nova_compute[189387]: 2025-11-26 23:24:27.869 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:24:27 compute-0 nova_compute[189387]: 2025-11-26 23:24:27.961 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:24:27 compute-0 nova_compute[189387]: 2025-11-26 23:24:27.977 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:24:28 compute-0 nova_compute[189387]: 2025-11-26 23:24:28.050 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:24:28 compute-0 nova_compute[189387]: 2025-11-26 23:24:28.052 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:24:28 compute-0 nova_compute[189387]: 2025-11-26 23:24:28.148 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:24:28 compute-0 nova_compute[189387]: 2025-11-26 23:24:28.150 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:24:28 compute-0 nova_compute[189387]: 2025-11-26 23:24:28.245 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:24:28 compute-0 nova_compute[189387]: 2025-11-26 23:24:28.247 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:24:28 compute-0 nova_compute[189387]: 2025-11-26 23:24:28.303 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:24:28 compute-0 nova_compute[189387]: 2025-11-26 23:24:28.312 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:24:28 compute-0 nova_compute[189387]: 2025-11-26 23:24:28.374 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:24:28 compute-0 nova_compute[189387]: 2025-11-26 23:24:28.375 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:24:28 compute-0 nova_compute[189387]: 2025-11-26 23:24:28.429 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:24:28 compute-0 nova_compute[189387]: 2025-11-26 23:24:28.430 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:24:28 compute-0 nova_compute[189387]: 2025-11-26 23:24:28.485 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:24:28 compute-0 nova_compute[189387]: 2025-11-26 23:24:28.487 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:24:28 compute-0 nova_compute[189387]: 2025-11-26 23:24:28.545 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:24:28 compute-0 podman[242143]: 2025-11-26 23:24:28.821954179 +0000 UTC m=+0.106217770 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 23:24:29 compute-0 nova_compute[189387]: 2025-11-26 23:24:29.006 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 23:24:29 compute-0 nova_compute[189387]: 2025-11-26 23:24:29.008 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4888MB free_disk=72.33919906616211GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 23:24:29 compute-0 nova_compute[189387]: 2025-11-26 23:24:29.009 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:24:29 compute-0 nova_compute[189387]: 2025-11-26 23:24:29.010 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:24:29 compute-0 nova_compute[189387]: 2025-11-26 23:24:29.246 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 3214d9e6-3c61-49f0-a353-01201a6aa6db actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 23:24:29 compute-0 nova_compute[189387]: 2025-11-26 23:24:29.247 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 0d344cef-8e34-4a0c-b747-b8f1f12bbe26 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 23:24:29 compute-0 nova_compute[189387]: 2025-11-26 23:24:29.248 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 23:24:29 compute-0 nova_compute[189387]: 2025-11-26 23:24:29.249 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 23:24:29 compute-0 nova_compute[189387]: 2025-11-26 23:24:29.250 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 23:24:29 compute-0 nova_compute[189387]: 2025-11-26 23:24:29.336 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing inventories for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 26 23:24:29 compute-0 nova_compute[189387]: 2025-11-26 23:24:29.440 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Updating ProviderTree inventory for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 26 23:24:29 compute-0 nova_compute[189387]: 2025-11-26 23:24:29.441 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Updating inventory in ProviderTree for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 26 23:24:29 compute-0 nova_compute[189387]: 2025-11-26 23:24:29.460 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing aggregate associations for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 26 23:24:29 compute-0 nova_compute[189387]: 2025-11-26 23:24:29.498 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing trait associations for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78, traits: COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE41,HW_CPU_X86_AMD_SVM,HW_CPU_X86_MMX,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,HW_CPU_X86_SSE,HW_CPU_X86_AESNI,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI2,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 26 23:24:29 compute-0 nova_compute[189387]: 2025-11-26 23:24:29.594 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:24:29 compute-0 nova_compute[189387]: 2025-11-26 23:24:29.610 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 23:24:29 compute-0 nova_compute[189387]: 2025-11-26 23:24:29.634 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 23:24:29 compute-0 nova_compute[189387]: 2025-11-26 23:24:29.635 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:24:29 compute-0 nova_compute[189387]: 2025-11-26 23:24:29.636 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:24:29 compute-0 podman[203621]: time="2025-11-26T23:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:24:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:24:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4799 "" "Go-http-client/1.1"
Nov 26 23:24:29 compute-0 nova_compute[189387]: 2025-11-26 23:24:29.870 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:24:30 compute-0 nova_compute[189387]: 2025-11-26 23:24:30.083 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:24:30 compute-0 nova_compute[189387]: 2025-11-26 23:24:30.286 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:24:30 compute-0 nova_compute[189387]: 2025-11-26 23:24:30.287 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:24:30 compute-0 nova_compute[189387]: 2025-11-26 23:24:30.287 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 23:24:31 compute-0 openstack_network_exporter[205787]: ERROR   23:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:24:31 compute-0 openstack_network_exporter[205787]: ERROR   23:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:24:31 compute-0 openstack_network_exporter[205787]: ERROR   23:24:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:24:31 compute-0 openstack_network_exporter[205787]: ERROR   23:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:24:31 compute-0 openstack_network_exporter[205787]: ERROR   23:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:24:32 compute-0 nova_compute[189387]: 2025-11-26 23:24:32.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:24:32 compute-0 nova_compute[189387]: 2025-11-26 23:24:32.128 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:24:34 compute-0 nova_compute[189387]: 2025-11-26 23:24:34.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:24:34 compute-0 nova_compute[189387]: 2025-11-26 23:24:34.126 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:24:34 compute-0 nova_compute[189387]: 2025-11-26 23:24:34.128 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:24:34 compute-0 nova_compute[189387]: 2025-11-26 23:24:34.129 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 26 23:24:34 compute-0 nova_compute[189387]: 2025-11-26 23:24:34.874 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:24:35 compute-0 nova_compute[189387]: 2025-11-26 23:24:35.098 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.842 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.843 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.852 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.863 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.862 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}caea05af4ff3bb71dca694a18a22cbf449a7452987534b1df6f159c64c91df36" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.864 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.867 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.868 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.869 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.870 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.871 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.872 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.873 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:24:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:36.874 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
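The run of "Registering pollster" records above is the agent wiring stevedore-loaded entry points to one shared thread pool, each starting with empty cache, history, and discovery-cache dicts. A minimal sketch of that shape, assuming the ceilometer.poll.compute entry-point namespace (the namespace and the registration structure here are illustrative, not ceilometer's actual internals):

    # Sketch: load entry-point plugins with stevedore and register each one
    # against a shared ThreadPoolExecutor, mirroring the log lines above.
    from concurrent.futures import ThreadPoolExecutor
    from stevedore import extension

    executor = ThreadPoolExecutor(max_workers=8)          # shared executor
    mgr = extension.ExtensionManager(
        namespace='ceilometer.poll.compute',              # assumed namespace
        invoke_on_load=True,
    )

    registrations = []
    for ext in mgr:  # each ext is a stevedore.extension.Extension
        registrations.append({
            'pollster': ext,
            'cache': {},            # "with cache [{}]"
            'history': {},          # "pollster history [{}]"
            'discovery_cache': {},  # "discovery cache [{}]"
        })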
Nov 26 23:24:36 compute-0 podman[242161]: 2025-11-26 23:24:36.905933222 +0000 UTC m=+0.183613541 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
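The podman record above is a scheduled healthcheck result for the ovn_controller container: health_status=healthy, zero failing streak, with the check command mounted at /openstack/healthcheck. The same check can be triggered on demand; a hedged sketch, with the container name taken from the log line:

    # Sketch: run the configured container healthcheck by hand via podman.
    import subprocess

    result = subprocess.run(
        ['podman', 'healthcheck', 'run', 'ovn_controller'],
        capture_output=True, text=True,
    )
    print(result.returncode)  # 0 means the check passed (healthy)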
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.218 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Wed, 26 Nov 2025 23:24:36 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-91e48a6c-927b-4e60-a8d2-73e71dbf6d37 x-openstack-request-id: req-91e48a6c-927b-4e60-a8d2-73e71dbf6d37 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.218 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd", "name": "vn-fhdmirp-runjo4u2h7na-he3onrrerp7p-vnf-pxixoz6blnnj", "status": "ACTIVE", "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "user_id": "6ad061874c77438db2e6d8efb2b1400b", "metadata": {"metering.server_group": "6ec897c5-079b-468e-ab49-e7a7350f9bc9"}, "hostId": "78fe62e880b703c207d346101c9f9f1436f7f233cb48d27a5485236f", "image": {"id": "422f324f-e13a-4c74-ba29-023e791ed636", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/422f324f-e13a-4c74-ba29-023e791ed636"}]}, "flavor": {"id": "abcd883d-a9af-4dee-93ae-b5623bc853b6", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/abcd883d-a9af-4dee-93ae-b5623bc853b6"}]}, "created": "2025-11-26T23:23:40Z", "updated": "2025-11-26T23:23:48Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.214", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:d8:b5:86"}, {"version": 4, "addr": "192.168.122.208", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:d8:b5:86"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-26T23:23:48.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.218 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd used request id req-91e48a6c-927b-4e60-a8d2-73e71dbf6d37 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
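The REQ/RESP pair above is keystoneauth1's HTTP logging for a single GET /v2.1/servers/{id} against the internal Nova endpoint, with the token masked as a SHA256 digest and the request id echoed back in both x-compute-request-id and x-openstack-request-id. A hedged reconstruction of the same call through python-novaclient (the auth URL and credentials below are placeholders, not values from this deployment):

    # Sketch: the logged GET issued via keystoneauth1 + python-novaclient.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client

    auth = v3.Password(
        auth_url='https://keystone-internal.openstack.svc:5000/v3',  # assumed
        username='ceilometer', password='...',                       # placeholders
        project_name='service',
        user_domain_name='Default', project_domain_name='Default',
    )
    sess = session.Session(auth=auth)          # supplies X-Auth-Token
    nova = client.Client('2.1', session=sess)  # X-OpenStack-Nova-API-Version: 2.1

    server = nova.servers.get('2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd')
    print(server.name, server.status, server.metadata)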
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.220 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd', 'name': 'vn-fhdmirp-runjo4u2h7na-he3onrrerp7p-vnf-pxixoz6blnnj', 'flavor': {'id': 'abcd883d-a9af-4dee-93ae-b5623bc853b6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '422f324f-e13a-4c74-ba29-023e791ed636'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd2e793599b6418881c391df7f71e0c6', 'user_id': '6ad061874c77438db2e6d8efb2b1400b', 'hostId': '78fe62e880b703c207d346101c9f9f1436f7f233cb48d27a5485236f', 'status': 'active', 'metadata': {'metering.server_group': '6ec897c5-079b-468e-ab49-e7a7350f9bc9'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.224 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '0d344cef-8e34-4a0c-b747-b8f1f12bbe26', 'name': 'vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp', 'flavor': {'id': 'abcd883d-a9af-4dee-93ae-b5623bc853b6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '422f324f-e13a-4c74-ba29-023e791ed636'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd2e793599b6418881c391df7f71e0c6', 'user_id': '6ad061874c77438db2e6d8efb2b1400b', 'hostId': '78fe62e880b703c207d346101c9f9f1436f7f233cb48d27a5485236f', 'status': 'active', 'metadata': {'metering.server_group': '6ec897c5-079b-468e-ab49-e7a7350f9bc9'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.227 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3214d9e6-3c61-49f0-a353-01201a6aa6db', 'name': 'test_0', 'flavor': {'id': 'abcd883d-a9af-4dee-93ae-b5623bc853b6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '422f324f-e13a-4c74-ba29-023e791ed636'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd2e793599b6418881c391df7f71e0c6', 'user_id': '6ad061874c77438db2e6d8efb2b1400b', 'hostId': '78fe62e880b703c207d346101c9f9f1436f7f233cb48d27a5485236f', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
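The three "instance data" records are the output of libvirt-based discovery: the locally running domains (instance-00000001 through -00000003) merged with the Nova metadata fetched above, including flavor, image, tenant, and the metering.server_group key that pollsters can use for grouping. A minimal sketch of that enumeration, with the Nova lookup left as a placeholder:

    # Sketch: enumerate local libvirt domains and enrich each with Nova
    # metadata, approximating the discover_libvirt_polling output above.
    import libvirt  # python3-libvirt bindings

    def nova_metadata(uuid):
        """Placeholder for the per-instance Nova GET shown earlier."""
        raise NotImplementedError

    conn = libvirt.open('qemu:///system')
    instances = []
    for dom in conn.listAllDomains():
        uuid = dom.UUIDString()
        instances.append({
            'id': uuid,
            'OS-EXT-SRV-ATTR:instance_name': dom.name(),
            'nova': nova_metadata(uuid),  # flavor, image, tenant, metadata...
        })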
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.227 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.228 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.228 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
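The "Checking if we need coordination" / "hashrings [None]" pair repeats for every pollster: a polling source may name a coordination group, in which case multiple agents partition resources over a hash ring and each polls only its share; no group is configured here, so the check short-circuits. A sketch of the partitioning idea using tooz's hash ring (member names are illustrative):

    # Sketch: decide which resources this agent polls when coordination is on.
    from tooz import hashring

    ring = hashring.HashRing(['agent-compute-0', 'agent-compute-1'])
    resources = ['2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd',
                 '0d344cef-8e34-4a0c-b747-b8f1f12bbe26']
    mine = [r for r in resources
            if 'agent-compute-0' in ring.get_nodes(r.encode())]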
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.228 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.228 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.229 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T23:24:38.228200) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
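Each poll run ends with a two-step liveness pattern: the polling process (pid 14 in these records) stamps a heartbeat for the pollster, and a second process (pid 12) records the timestamp via _update_status. A stripped-down sketch of that handoff (names are illustrative):

    # Sketch: worker stamps a per-pollster heartbeat; a watcher persists the
    # last-seen timestamp, as in the paired log lines above.
    import datetime
    import threading

    _heartbeats = {}
    _lock = threading.Lock()

    def heartbeat(pollster_name):
        with _lock:
            _heartbeats[pollster_name] = datetime.datetime.now(datetime.timezone.utc)

    def update_status(pollster_name):
        with _lock:
            ts = _heartbeats.get(pollster_name)
        print(f"Updated heartbeat for {pollster_name} ({ts})")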
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.229 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.229 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.229 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.229 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.230 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.230 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T23:24:38.229983) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.234 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd / tapc5ede21d-87 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.234 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.238 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.241 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.242 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
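The "No delta meter predecessor" line earlier in this cycle explains itself on the next poll: delta-style meters need a previous reading for the same (instance, vNIC) pair, so the first pass can only seed the cache and emit the cumulative counter. A sketch of that predecessor cache (the layout is illustrative):

    # Sketch: delta meters compare against the previous cumulative reading;
    # the first poll for a key has no predecessor and yields no delta.
    _prev = {}  # (instance_uuid, vnic_name) -> last cumulative count

    def delta_packets(instance_uuid, vnic_name, rx_packets):
        key = (instance_uuid, vnic_name)
        last = _prev.get(key)
        _prev[key] = rx_packets
        if last is None:
            return None  # "No delta meter predecessor"
        return rx_packets - last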
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.242 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.242 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.242 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.242 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.242 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.243 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.243 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.243 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T23:24:38.242811) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.243 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.243 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.244 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.244 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.244 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.244 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.244 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T23:24:38.244133) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.244 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.245 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.245 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.245 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.245 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.245 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.245 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.246 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T23:24:38.245890) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.271 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/cpu volume: 32250000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.291 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/cpu volume: 209490000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.313 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/cpu volume: 37510000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.314 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
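The cpu meter is cumulative guest CPU time in nanoseconds (209490000000 ns above is roughly 209.5 s of CPU time for instance 0d344cef...). A rate-style utilisation is normally derived downstream from two consecutive samples; a sketch of the usual transform:

    # Sketch: percent CPU utilisation between two cumulative cpu-time samples.
    def cpu_util_percent(cpu_ns_t0, cpu_ns_t1, wall_seconds, vcpus):
        busy_seconds = (cpu_ns_t1 - cpu_ns_t0) / 1e9
        return 100.0 * busy_seconds / (wall_seconds * vcpus)

    # e.g. 30e9 ns consumed over 300 s on a 1-vCPU flavor -> 10.0 %
    assert cpu_util_percent(0, 30e9, 300, 1) == 10.0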
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.314 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.314 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.314 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.314 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.315 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.315 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.315 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.315 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.316 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.316 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.316 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.316 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.316 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.317 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T23:24:38.314873) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.317 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T23:24:38.316666) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.345 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.345 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.346 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.376 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.376 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.377 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.413 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.414 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.414 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.415 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
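Each instance reports three capacity samples, which matches the discovery payload: a 1 GiB root disk and a 1 GiB ephemeral disk from the m1.small flavor (1073741824 bytes each) plus a small third device, consistent with the config drive ("config_drive": "True" in the Nova response). Per-device capacity comes from libvirt; a sketch, with device names assumed:

    # Sketch: per-device capacity via libvirt blockInfo
    # (returns [capacity, allocation, physical] in bytes).
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd')
    for dev in ('vda', 'vdb', 'hda'):  # root, ephemeral, config drive (assumed names)
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, capacity)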
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.415 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.415 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.416 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.416 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.416 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.416 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.outgoing.bytes volume: 1906 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.416 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T23:24:38.416276) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.416 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.outgoing.bytes volume: 4694 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.417 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.bytes volume: 2314 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.417 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.417 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.418 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.418 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.418 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.418 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.418 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.418 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.419 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.419 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T23:24:38.418379) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.420 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.420 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.420 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.420 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.420 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.420 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.420 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/memory.usage volume: 49.72265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.420 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/memory.usage volume: 49.08984375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.421 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/memory.usage volume: 48.9453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.421 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
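memory.usage is reported in MiB (about 49.7 of the flavor's 512 MiB here), derived from the balloon statistics libvirt exposes per domain. One common derivation, sketched under the assumption that the guest's balloon driver reports 'available' and 'unused' in KiB:

    # Sketch: memory usage in MiB from libvirt's per-domain memory stats.
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd')
    stats = dom.memoryStats()  # KiB values; keys depend on the balloon driver
    usage_mib = (stats['available'] - stats['unused']) / 1024.0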
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.421 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.422 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.422 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T23:24:38.420556) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.422 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.422 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.422 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.423 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.423 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T23:24:38.422805) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.424 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.incoming.bytes volume: 4933 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.424 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.bytes volume: 2094 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.425 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
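All of the network.* counters in this cycle come from one interface-statistics call per vNIC; the tap device name (tapc5ede21d-87) appeared in the inspect_vnics line above. A sketch of the underlying libvirt call:

    # Sketch: cumulative vNIC counters from libvirt interfaceStats
    # (8-tuple: rx_bytes, rx_packets, rx_errs, rx_drop, then the tx_ four).
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd')
    (rx_bytes, rx_packets, rx_errs, rx_drop,
     tx_bytes, tx_packets, tx_errs, tx_drop) = dom.interfaceStats('tapc5ede21d-87')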
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.425 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.425 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.425 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.425 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.425 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.426 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.426 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-fhdmirp-runjo4u2h7na-he3onrrerp7p-vnf-pxixoz6blnnj>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-fhdmirp-runjo4u2h7na-he3onrrerp7p-vnf-pxixoz6blnnj>]
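This ERROR is the blacklist path, not a crash: LibvirtInspector cannot supply rate meters, so the pollster raises a permanent error naming the affected resources and the manager stops offering them to that pollster on this source. A stripped-down sketch of the pattern (the class body and loop are illustrative, not ceilometer's code):

    # Sketch: resources that raise a permanent error are blacklisted so the
    # pollster is never asked to poll them again on this source.
    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            super().__init__(resources)
            self.resources = resources

    blacklist = set()

    def poll_once(resources, get_samples):
        samples = []
        for r in resources:
            if r in blacklist:
                continue
            try:
                samples.extend(get_samples(r))
            except PollsterPermanentError as exc:
                blacklist.update(exc.resources)  # "Prevent pollster ... anymore!"
        return samples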
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.426 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.427 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.427 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-26T23:24:38.425902) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.427 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.427 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.427 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.427 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.427 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T23:24:38.427408) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.427 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.outgoing.packets volume: 40 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.428 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.428 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.428 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.428 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.429 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.429 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.429 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.429 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T23:24:38.429299) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.507 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.508 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.508 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.597 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.597 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.597 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.669 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.669 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.670 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.670 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
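Each instance yields three disk.device.read.bytes samples above because it has three attached block devices; the volumes are cumulative bytes read since boot. A hedged sketch of where such figures come from, calling the libvirt-python bindings directly (ceilometer actually goes through its LibvirtInspector; the device names are assumed for illustration):

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd')
    for dev in ('vda', 'vdb', 'vdc'):  # assumed device names
        rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
        # rd_bytes for the first device corresponds to the 23308800 logged
        # above; rd_req/wr_req feed disk.device.read.requests and
        # .write.requests, and wr_bytes feeds disk.device.write.bytes,
        # all polled later in this same cycle.
        print(dev, rd_bytes)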
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.670 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.670 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.670 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.670 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.670 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.671 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.671 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.671 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.671 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.671 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.672 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.672 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.672 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.672 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T23:24:38.670907) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.672 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.672 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.672 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.672 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.673 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
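The zero volumes for network.outgoing.packets.drop and .error are cumulative per-vNIC counters. With libvirt-python they map onto interfaceStats(), which returns eight counters per interface; the tap device name below is assumed, and again ceilometer reads these through its inspector rather than directly:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('0d344cef-8e34-4a0c-b747-b8f1f12bbe26')
    (rx_bytes, rx_packets, rx_errs, rx_drop,
     tx_bytes, tx_packets, tx_errs, tx_drop) = dom.interfaceStats('vnet0')
    print(tx_drop, tx_errs)  # both 0, matching the volumes above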
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.673 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.673 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.673 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T23:24:38.672383) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.673 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.673 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.673 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.673 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.latency volume: 833217718 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.674 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.latency volume: 118947761 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.674 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.latency volume: 102487832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.674 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.latency volume: 933784002 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.674 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.latency volume: 144704360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.674 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.latency volume: 114761007 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.675 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.latency volume: 766490036 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.675 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.latency volume: 135917507 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.675 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.latency volume: 99383059 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.675 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
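disk.device.read.latency is likewise cumulative, counted in nanoseconds spent in completed read operations; in libvirt-python terms it corresponds to the 'rd_total_times' field of blockStatsFlags(). A minimal sketch (device name assumed):

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd')
    stats = dom.blockStatsFlags('vda')  # assumed device name
    print(stats['rd_total_times'])  # cumulative ns; cf. 833217718 above
    print(stats['wr_total_times'])  # feeds disk.device.write.latency below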
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.675 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.676 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.676 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T23:24:38.673771) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.676 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.676 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.676 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.676 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.676 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.676 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.677 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.677 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.677 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.677 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.677 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.678 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.678 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.678 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.678 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.679 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.679 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.679 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T23:24:38.676394) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.679 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.679 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.679 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.679 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.679 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.680 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.680 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.680 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.680 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.680 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.681 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
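disk.device.usage and disk.device.allocation derive from a single libvirt call: blockInfo() reports (capacity, allocation, physical) in bytes per device, and the pollster class names above (PerDevicePhysicalPollster for usage, PerDeviceAllocationPollster for allocation) indicate which field feeds which meter. A hedged sketch, bypassing ceilometer's inspector layer and assuming the device name:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd')
    capacity, allocation, physical = dom.blockInfo('vda')  # assumed name
    # cf. disk.device.allocation volume 22224896 and disk.device.usage
    # volume 21299200 for this instance above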
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.681 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.681 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T23:24:38.679188) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.681 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.681 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.681 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.681 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.682 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.682 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.682 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.682 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.682 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.682 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.683 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.683 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.683 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.683 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.684 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.684 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.684 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T23:24:38.681888) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.684 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.684 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.684 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.684 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.684 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.685 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.685 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T23:24:38.684459) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.685 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.685 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.685 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.685 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.686 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.686 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.686 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.686 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.687 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.687 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.687 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.687 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.687 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.687 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.687 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.688 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
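power.state volume 1 for all three instances means running: libvirt's domain state and nova's power_state both encode RUNNING as 1. A direct check with libvirt-python might look like this (a sketch, not the pollster's actual code path):

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('3214d9e6-3c61-49f0-a353-01201a6aa6db')
    state, reason = dom.state()
    print(state == libvirt.VIR_DOMAIN_RUNNING)  # True <=> volume 1 above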
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.688 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.688 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.688 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T23:24:38.687355) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.688 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.688 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.688 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.688 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.latency volume: 2682966537 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.689 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.latency volume: 13192002 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.689 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.689 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.latency volume: 2743280218 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.689 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.latency volume: 15877212 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.689 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.690 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.latency volume: 2067067389 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.690 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.latency volume: 14796330 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.690 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.690 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.691 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.691 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.691 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T23:24:38.688851) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.691 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.691 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.691 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.691 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.691 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.691 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.692 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
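The .delta meters are computed agent-side: the current cumulative counter is compared against the value cached at the previous poll, so the first poll establishes a baseline and each later poll emits the difference. A minimal sketch of that bookkeeping (the cache layout is illustrative, not ceilometer's internal structure):

    cache = {}  # (instance id, meter name) -> last cumulative reading

    def delta_sample(instance_id, meter, current):
        key = (instance_id, meter)
        previous = cache.get(key)
        cache[key] = current
        if previous is None:
            return 0                       # no baseline on the first poll
        return max(current - previous, 0)  # guard against counter resets

    uuid = '0d344cef-8e34-4a0c-b747-b8f1f12bbe26'
    delta_sample(uuid, 'network.incoming.bytes', 1000)         # baseline poll
    print(delta_sample(uuid, 'network.incoming.bytes', 1084))  # 84, as logged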
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.692 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.692 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.692 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T23:24:38.691493) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.692 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.692 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.693 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.693 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.requests volume: 220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.693 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.693 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.693 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.requests volume: 241 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.693 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.694 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.694 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.694 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.694 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.695 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.695 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.695 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.695 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T23:24:38.692967) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.695 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.695 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.695 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.695 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.695 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-fhdmirp-runjo4u2h7na-he3onrrerp7p-vnf-pxixoz6blnnj>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-fhdmirp-runjo4u2h7na-he3onrrerp7p-vnf-pxixoz6blnnj>]
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.696 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-26T23:24:38.695770) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
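The ERROR above is deliberate blacklisting rather than a crash: the libvirt inspector only exposes cumulative counters, so an instantaneous meter like network.incoming.bytes.rate cannot be produced, and the pollster raises PollsterPermanentError with the affected resources so the manager stops polling them for this source. PollsterPermanentError is the real exception class (the traceback names ceilometer.polling.plugin_base.PollsterPermanentError); the pollster body below is a simplified illustration, not the shipped code:

    from ceilometer.polling import plugin_base

    class RatePollsterSketch(plugin_base.PollsterBase):
        @property
        def default_discovery(self):
            return 'local_instances'  # matches the discovery method logged

        def get_samples(self, manager, cache, resources):
            # No instantaneous rate data is available, so mark every
            # resource as permanently unpollable for this source:
            raise plugin_base.PollsterPermanentError(resources)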
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.696 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.696 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.696 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.696 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.696 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.697 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.698 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.698 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:24:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:24:38.698 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
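The burst above is one ceilometer polling cycle: the polling manager walks every enabled pollster in the task and logs one completion line per meter. A minimal sketch of that dispatch loop, assuming a simplified get_samples() interface (the real manager in ceilometer/polling/manager.py also handles resource discovery, caching, and publishing):

    import logging

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger("ceilometer.polling.manager")

    class Pollster:
        """Stand-in for a ceilometer pollster; named after the meter it produces."""
        def __init__(self, name):
            self.name = name

        def get_samples(self):
            return []  # a real pollster would query libvirt here and yield samples

    def execute_polling_task_processing(pollsters):
        for pollster in pollsters:
            samples = pollster.get_samples()   # samples would go to the publishers
            LOG.debug("Finished processing pollster [%s].", pollster.name)

    execute_polling_task_processing([Pollster("memory.usage"),
                                     Pollster("disk.device.read.bytes")])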
Nov 26 23:24:39 compute-0 podman[242188]: 2025-11-26 23:24:39.850296562 +0000 UTC m=+0.130386054 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, config_id=edpm, release-0.7.12=, version=9.4, name=ubi9, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Nov 26 23:24:39 compute-0 podman[242189]: 2025-11-26 23:24:39.852541221 +0000 UTC m=+0.110632597 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 23:24:39 compute-0 podman[242190]: 2025-11-26 23:24:39.857793092 +0000 UTC m=+0.109667432 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:24:39 compute-0 nova_compute[189387]: 2025-11-26 23:24:39.876 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:24:39 compute-0 podman[242197]: 2025-11-26 23:24:39.877355203 +0000 UTC m=+0.132606813 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 26 23:24:39 compute-0 podman[242198]: 2025-11-26 23:24:39.897646412 +0000 UTC m=+0.139956308 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.expose-services=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, io.buildah.version=1.33.7, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
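Each podman health_status record above is one healthcheck run, with the container's image labels and edpm config_data flattened into a single line. To watch container health from the journal it is usually enough to pull out the name and status; the regex below is a sketch matched against the line format seen here, not a stable podman interface:

    import re

    HEALTH_RE = re.compile(
        r"container health_status \w+ \(image=([^,]+), name=([^,]+), health_status=(\w+)")

    sample = ("2025-11-26 23:24:39.85 +0000 UTC m=+0.13 container health_status abc123 "
              "(image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, "
              "health_status=healthy, health_failing_streak=0, ...)")

    match = HEALTH_RE.search(sample)
    if match:
        image, name, status = match.groups()
        print(f"{name}: {status} ({image})")   # node_exporter: healthy (...)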
Nov 26 23:24:40 compute-0 nova_compute[189387]: 2025-11-26 23:24:40.101 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:24:44 compute-0 nova_compute[189387]: 2025-11-26 23:24:44.879 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:24:45 compute-0 nova_compute[189387]: 2025-11-26 23:24:45.104 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
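The recurring "[POLLIN] on fd 26" lines come from the OVSDB IDL inside nova_compute: ovsdbapp routes the ovs library's vlog output into the nova log, and __log_wakeup fires whenever poll(2) reports the ovsdb-server connection readable. A stdlib-only sketch of the same wakeup mechanism (the ovs.poller module wraps exactly this):

    import os
    import select

    read_fd, write_fd = os.pipe()
    poller = select.poll()
    poller.register(read_fd, select.POLLIN)

    os.write(write_fd, b"update")            # simulate the OVSDB socket becoming readable
    for fd, events in poller.poll(1000):     # timeout in milliseconds
        if events & select.POLLIN:
            print(f"[POLLIN] on fd {fd}")    # analogue of the __log_wakeup debug line
            os.read(fd, 64)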
Nov 26 23:24:49 compute-0 podman[242285]: 2025-11-26 23:24:49.837885529 +0000 UTC m=+0.123641873 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Nov 26 23:24:49 compute-0 nova_compute[189387]: 2025-11-26 23:24:49.883 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:24:50 compute-0 nova_compute[189387]: 2025-11-26 23:24:50.107 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:24:54 compute-0 podman[242304]: 2025-11-26 23:24:54.827856216 +0000 UTC m=+0.112867397 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 23:24:54 compute-0 nova_compute[189387]: 2025-11-26 23:24:54.885 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:24:55 compute-0 nova_compute[189387]: 2025-11-26 23:24:55.111 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:24:59 compute-0 podman[203621]: time="2025-11-26T23:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:24:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:24:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4797 "" "Go-http-client/1.1"
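The podman[203621] lines are the podman system service answering libpod REST calls over its unix socket; the caller here is podman_exporter, configured above with CONTAINER_HOST=unix:///run/podman/podman.sock. A stdlib sketch of the same containers/json query, with the socket path and API version taken from the log; the "Names" and "State" fields are as returned by podman 4.x and should be treated as an assumption:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection over a unix socket (the stdlib has no built-in support)."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    for c in containers:
        print(c["Names"], c["State"])

Serving the API on a socket rather than TCP is why the exporter needs the /run/podman/podman.sock volume mount shown in its config_data above.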
Nov 26 23:24:59 compute-0 podman[242328]: 2025-11-26 23:24:59.859449131 +0000 UTC m=+0.142878556 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 26 23:24:59 compute-0 nova_compute[189387]: 2025-11-26 23:24:59.887 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:25:00 compute-0 nova_compute[189387]: 2025-11-26 23:25:00.115 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:25:01 compute-0 openstack_network_exporter[205787]: ERROR   23:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:25:01 compute-0 openstack_network_exporter[205787]: ERROR   23:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:25:01 compute-0 openstack_network_exporter[205787]: ERROR   23:25:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:25:01 compute-0 openstack_network_exporter[205787]: ERROR   23:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:25:01 compute-0 openstack_network_exporter[205787]: ERROR   23:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
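These errors mean the exporter cannot find appctl control sockets for ovn-northd or ovsdb-server, which is expected on a compute node: only ovs-vswitchd and ovn-controller run here, while northd lives on the control plane. Daemon discovery works by globbing for <daemon>.<pid>.ctl under the runtime directories; a sketch of that step, with the directories assumed from the /run/openvswitch and /run/ovn volume mounts shown above:

    import glob

    def find_ctl_socket(daemon, run_dirs=("/run/openvswitch", "/run/ovn")):
        """Look for <daemon>.<pid>.ctl the way ovs-appctl -t <daemon> would."""
        for run_dir in run_dirs:
            matches = glob.glob(f"{run_dir}/{daemon}.*.ctl")
            if matches:
                return matches[0]
        return None

    for daemon in ("ovs-vswitchd", "ovsdb-server", "ovn-northd"):
        sock = find_ctl_socket(daemon)
        print(daemon, "->", sock or "no control socket files found")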
Nov 26 23:25:04 compute-0 nova_compute[189387]: 2025-11-26 23:25:04.890 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:25:05 compute-0 nova_compute[189387]: 2025-11-26 23:25:05.119 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:25:07 compute-0 podman[242347]: 2025-11-26 23:25:07.89300318 +0000 UTC m=+0.172651058 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Nov 26 23:25:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:25:09.629 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:25:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:25:09.630 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:25:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:25:09.631 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:25:09 compute-0 nova_compute[189387]: 2025-11-26 23:25:09.893 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:25:10 compute-0 nova_compute[189387]: 2025-11-26 23:25:10.122 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:25:10 compute-0 podman[242373]: 2025-11-26 23:25:10.841210624 +0000 UTC m=+0.109287221 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 23:25:10 compute-0 podman[242374]: 2025-11-26 23:25:10.850408938 +0000 UTC m=+0.114722376 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 26 23:25:10 compute-0 podman[242376]: 2025-11-26 23:25:10.850439629 +0000 UTC m=+0.105899040 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-minimal-container, version=9.6, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, io.openshift.expose-services=, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 26 23:25:10 compute-0 podman[242375]: 2025-11-26 23:25:10.851909518 +0000 UTC m=+0.107531984 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 26 23:25:10 compute-0 podman[242372]: 2025-11-26 23:25:10.871187632 +0000 UTC m=+0.145298551 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., name=ubi9, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-type=git)
Nov 26 23:25:14 compute-0 nova_compute[189387]: 2025-11-26 23:25:14.896 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:25:15 compute-0 nova_compute[189387]: 2025-11-26 23:25:15.125 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:25:19 compute-0 nova_compute[189387]: 2025-11-26 23:25:19.899 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:25:20 compute-0 nova_compute[189387]: 2025-11-26 23:25:20.128 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:25:20 compute-0 podman[242468]: 2025-11-26 23:25:20.8099769 +0000 UTC m=+0.101594467 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:25:24 compute-0 nova_compute[189387]: 2025-11-26 23:25:24.902 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:25:25 compute-0 nova_compute[189387]: 2025-11-26 23:25:25.132 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:25:25 compute-0 nova_compute[189387]: 2025-11-26 23:25:25.147 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:25:25 compute-0 nova_compute[189387]: 2025-11-26 23:25:25.147 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:25:25 compute-0 nova_compute[189387]: 2025-11-26 23:25:25.148 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 23:25:25 compute-0 podman[242486]: 2025-11-26 23:25:25.809943803 +0000 UTC m=+0.099639566 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 23:25:25 compute-0 nova_compute[189387]: 2025-11-26 23:25:25.896 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:25:25 compute-0 nova_compute[189387]: 2025-11-26 23:25:25.896 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:25:25 compute-0 nova_compute[189387]: 2025-11-26 23:25:25.896 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 23:25:25 compute-0 nova_compute[189387]: 2025-11-26 23:25:25.897 189391 DEBUG nova.objects.instance [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3214d9e6-3c61-49f0-a353-01201a6aa6db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 23:25:27 compute-0 nova_compute[189387]: 2025-11-26 23:25:27.910 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Updating instance_info_cache with network_info: [{"id": "3109b207-2fdd-46a4-8789-08fff2b3f916", "address": "fa:16:3e:bf:c7:ca", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3109b207-2f", "ovs_interfaceid": "3109b207-2fdd-46a4-8789-08fff2b3f916", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
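The payload above is the full VIF view that the heal task writes back into nova's instance info cache: one OVS port on br-int, a fixed IP with a floating IP attached, bound by the ovn driver on a tunneled network. It is plain JSON, so extracting the addressing is mechanical; a sketch over a trimmed copy of the record logged above:

    import json

    network_info = json.loads("""
    [{"id": "3109b207-2fdd-46a4-8789-08fff2b3f916",
      "address": "fa:16:3e:bf:c7:ca",
      "network": {"label": "private",
                  "subnets": [{"cidr": "192.168.0.0/24",
                               "ips": [{"address": "192.168.0.4",
                                        "type": "fixed",
                                        "floating_ips": [{"address": "192.168.122.212",
                                                          "type": "floating"}]}]}]}}]
    """)

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["address"], ip["address"], "->", floats)
    # fa:16:3e:bf:c7:ca 192.168.0.4 -> ['192.168.122.212']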
Nov 26 23:25:27 compute-0 nova_compute[189387]: 2025-11-26 23:25:27.926 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 23:25:27 compute-0 nova_compute[189387]: 2025-11-26 23:25:27.927 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 26 23:25:27 compute-0 nova_compute[189387]: 2025-11-26 23:25:27.929 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:25:28 compute-0 nova_compute[189387]: 2025-11-26 23:25:28.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:25:28 compute-0 nova_compute[189387]: 2025-11-26 23:25:28.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
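_heal_instance_info_cache, _poll_unconfirmed_resizes and _reclaim_queued_deletes are all periodic tasks driven by oslo_service.periodic_task: the runner prints the "Running periodic task" line, and each task may bail out early on its own config guard, as _reclaim_queued_deletes does here when reclaim_instance_interval <= 0. A stdlib-only reduction of that pattern (names mirror the log; the real runner also tracks per-task spacing and last-run times):

    import logging

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger("oslo_service.periodic_task")

    reclaim_instance_interval = 0      # stands in for CONF.reclaim_instance_interval

    def _reclaim_queued_deletes():
        if reclaim_instance_interval <= 0:
            LOG.debug("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # would look up soft-deleted instances past the interval and reclaim them

    def run_periodic_tasks(tasks):
        for task in tasks:
            LOG.debug("Running periodic task %s", task.__name__)
            task()

    run_periodic_tasks([_reclaim_queued_deletes])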
Nov 26 23:25:29 compute-0 nova_compute[189387]: 2025-11-26 23:25:29.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:25:29 compute-0 nova_compute[189387]: 2025-11-26 23:25:29.153 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:25:29 compute-0 nova_compute[189387]: 2025-11-26 23:25:29.154 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:25:29 compute-0 nova_compute[189387]: 2025-11-26 23:25:29.156 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
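The Acquiring/acquired/released trio around clean_compute_node_cache (and around _check_child_processes in ovn_metadata_agent earlier) is oslo.concurrency's standard lock instrumentation; the waited/held durations make lock contention visible straight from DEBUG logs. A minimal sketch of how such messages are produced, assuming oslo.concurrency is available as it is in these containers:

    import logging

    from oslo_concurrency import lockutils

    logging.basicConfig(level=logging.DEBUG)

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # runs with the named in-process lock held; lockutils logs the
        # Acquiring/acquired/released lines with waited/held timings
        pass

    clean_compute_node_cache()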
Nov 26 23:25:29 compute-0 nova_compute[189387]: 2025-11-26 23:25:29.157 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:25:29 compute-0 nova_compute[189387]: 2025-11-26 23:25:29.263 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:25:29 compute-0 nova_compute[189387]: 2025-11-26 23:25:29.356 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:25:29 compute-0 nova_compute[189387]: 2025-11-26 23:25:29.357 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:25:29 compute-0 nova_compute[189387]: 2025-11-26 23:25:29.428 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:25:29 compute-0 nova_compute[189387]: 2025-11-26 23:25:29.429 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:25:29 compute-0 nova_compute[189387]: 2025-11-26 23:25:29.492 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:25:29 compute-0 nova_compute[189387]: 2025-11-26 23:25:29.493 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:25:29 compute-0 nova_compute[189387]: 2025-11-26 23:25:29.558 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:25:29 compute-0 nova_compute[189387]: 2025-11-26 23:25:29.565 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:25:29 compute-0 nova_compute[189387]: 2025-11-26 23:25:29.658 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:25:29 compute-0 nova_compute[189387]: 2025-11-26 23:25:29.660 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:25:29 compute-0 nova_compute[189387]: 2025-11-26 23:25:29.720 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:25:29 compute-0 nova_compute[189387]: 2025-11-26 23:25:29.721 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:25:29 compute-0 podman[203621]: time="2025-11-26T23:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:25:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:25:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
Nov 26 23:25:29 compute-0 nova_compute[189387]: 2025-11-26 23:25:29.823 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:25:29 compute-0 nova_compute[189387]: 2025-11-26 23:25:29.825 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:25:29 compute-0 nova_compute[189387]: 2025-11-26 23:25:29.904 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:25:29 compute-0 nova_compute[189387]: 2025-11-26 23:25:29.908 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:25:29 compute-0 nova_compute[189387]: 2025-11-26 23:25:29.915 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:25:29 compute-0 nova_compute[189387]: 2025-11-26 23:25:29.977 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:25:29 compute-0 nova_compute[189387]: 2025-11-26 23:25:29.979 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:25:30 compute-0 nova_compute[189387]: 2025-11-26 23:25:30.041 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:25:30 compute-0 nova_compute[189387]: 2025-11-26 23:25:30.042 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:25:30 compute-0 nova_compute[189387]: 2025-11-26 23:25:30.103 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:25:30 compute-0 nova_compute[189387]: 2025-11-26 23:25:30.104 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:25:30 compute-0 nova_compute[189387]: 2025-11-26 23:25:30.135 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:25:30 compute-0 nova_compute[189387]: 2025-11-26 23:25:30.167 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
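Note: each of those probes returns JSON on stdout, and the resource tracker only needs the size fields from it to build the per-instance disk usage reported below. A sketch of the parsing step, assuming qemu-img's documented JSON keys (virtual-size is always present; actual-size can be missing on some storage backends):

    import json
    import subprocess

    def image_sizes(path):
        out = subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            capture_output=True, text=True, check=True).stdout
        info = json.loads(out)
        # both values are in bytes
        return info["virtual-size"], info.get("actual-size")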
Nov 26 23:25:30 compute-0 nova_compute[189387]: 2025-11-26 23:25:30.470 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
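Note: that warning fires when libvirt reports a CPU topology in which one NUMA cell spans more than one socket; the socket PCI NUMA affinity policy needs an unambiguous node-to-socket mapping, so nova disables it (harmless here unless flavors request hw:pci_numa_affinity_policy=socket). One way to inspect the mapping the check is about, via lscpu (hypothetical helper, not nova code):

    import subprocess
    from collections import defaultdict

    def sockets_per_numa_node():
        out = subprocess.run(["lscpu", "-p=SOCKET,NODE"],
                             capture_output=True, text=True, check=True).stdout
        nodes = defaultdict(set)
        for line in out.splitlines():
            if line.startswith("#") or not line.strip():
                continue  # skip lscpu's comment header
            socket_id, node_id = line.split(",")[:2]
            nodes[node_id].add(socket_id)
        return {node: len(socks) for node, socks in nodes.items()}

    # any value greater than 1 reproduces the warning above
    print(sockets_per_numa_node())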
Nov 26 23:25:30 compute-0 nova_compute[189387]: 2025-11-26 23:25:30.472 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4871MB free_disk=72.3392219543457GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 23:25:30 compute-0 nova_compute[189387]: 2025-11-26 23:25:30.473 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:25:30 compute-0 nova_compute[189387]: 2025-11-26 23:25:30.474 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:25:30 compute-0 nova_compute[189387]: 2025-11-26 23:25:30.563 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 3214d9e6-3c61-49f0-a353-01201a6aa6db actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 23:25:30 compute-0 nova_compute[189387]: 2025-11-26 23:25:30.564 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 0d344cef-8e34-4a0c-b747-b8f1f12bbe26 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 23:25:30 compute-0 nova_compute[189387]: 2025-11-26 23:25:30.564 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 23:25:30 compute-0 nova_compute[189387]: 2025-11-26 23:25:30.565 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 23:25:30 compute-0 nova_compute[189387]: 2025-11-26 23:25:30.565 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
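Note: the final view is consistent with the three allocations listed just above. Each instance holds {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}, so used_vcpus = 3 x 1 = 3 and used_disk = 3 x 2 GB = 6 GB, while used_ram = 3 x 512 MB plus what appears to be the 512 MB host reservation from the inventory below = 2048 MB. The arithmetic, spelled out:

    allocations = [{"DISK_GB": 2, "MEMORY_MB": 512, "VCPU": 1}] * 3
    reserved_ram_mb = 512   # 'reserved' for MEMORY_MB in the inventory below

    used_vcpus = sum(a["VCPU"] for a in allocations)                          # 3
    used_disk_gb = sum(a["DISK_GB"] for a in allocations)                     # 6
    used_ram_mb = sum(a["MEMORY_MB"] for a in allocations) + reserved_ram_mb  # 2048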
Nov 26 23:25:30 compute-0 nova_compute[189387]: 2025-11-26 23:25:30.661 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:25:30 compute-0 nova_compute[189387]: 2025-11-26 23:25:30.684 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 23:25:30 compute-0 nova_compute[189387]: 2025-11-26 23:25:30.686 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 23:25:30 compute-0 nova_compute[189387]: 2025-11-26 23:25:30.687 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
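Note: placement derives schedulable capacity from the inventory logged at 23:25:30.684 as (total - reserved) * allocation_ratio per resource class, so this host can hold up to 32 VCPU, 7168 MB of RAM and 70.2 GB of disk worth of allocations. A sketch over the inventory dict, abridged to the fields the formula uses:

    inventory = {  # values exactly as logged above
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    def capacity(inv):
        return {rc: (f["total"] - f["reserved"]) * f["allocation_ratio"]
                for rc, f in inv.items()}

    print(capacity(inventory))
    # {'VCPU': 32.0, 'MEMORY_MB': 7168.0, 'DISK_GB': 70.2}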
Nov 26 23:25:30 compute-0 podman[242547]: 2025-11-26 23:25:30.805528428 +0000 UTC m=+0.102135240 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_managed=true, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Nov 26 23:25:31 compute-0 openstack_network_exporter[205787]: ERROR   23:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:25:31 compute-0 openstack_network_exporter[205787]: ERROR   23:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:25:31 compute-0 openstack_network_exporter[205787]: ERROR   23:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:25:31 compute-0 openstack_network_exporter[205787]: ERROR   23:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:25:31 compute-0 openstack_network_exporter[205787]: ERROR   23:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
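Note: the exporter errors above are expected on a compute node. openstack_network_exporter locates OVS/OVN daemons through their control sockets, and ovn-northd (plus an ovsdb-server for the OVN databases) only runs on the control plane, so no socket exists here and the dpif-netdev calls have no datapath to target. A minimal illustration of that discovery step (the directories and the <daemon>.<pid>.ctl naming are assumptions based on common OVN/OVS run dirs, not the exporter's actual code):

    import glob

    def find_control_socket(daemon, rundirs=("/run/ovn", "/run/openvswitch")):
        for d in rundirs:
            hits = glob.glob(f"{d}/{daemon}.*.ctl")
            if hits:
                return hits[0]
        return None   # -> "no control socket files found", as logged above

    print(find_control_socket("ovn-northd"))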
Nov 26 23:25:32 compute-0 nova_compute[189387]: 2025-11-26 23:25:32.682 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:25:32 compute-0 nova_compute[189387]: 2025-11-26 23:25:32.684 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:25:33 compute-0 nova_compute[189387]: 2025-11-26 23:25:33.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:25:34 compute-0 nova_compute[189387]: 2025-11-26 23:25:34.907 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:25:35 compute-0 nova_compute[189387]: 2025-11-26 23:25:35.126 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:25:35 compute-0 nova_compute[189387]: 2025-11-26 23:25:35.128 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:25:35 compute-0 nova_compute[189387]: 2025-11-26 23:25:35.139 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:25:36 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:25:36.366 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:74:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:17:d1:48:8c:c3'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:25:36 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:25:36.366 106595 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 26 23:25:36 compute-0 nova_compute[189387]: 2025-11-26 23:25:36.367 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:25:38 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:25:38.369 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbd59242-3683-4df7-8a2a-12b2eb702783, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
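Note: the pair of metadata-agent lines at 23:25:36 and 23:25:38 show a deliberate debounce. On seeing SB_Global.nb_cfg move to 6, the agent waits 2 seconds before acknowledging it in Chassis_Private external_ids ('neutron:ovn-metadata-sb-cfg': '6'), so a burst of nb_cfg bumps costs one southbound write instead of many. A generic sketch of that pattern (illustrative only, not the agent's implementation):

    import threading

    class DebouncedAck:
        def __init__(self, write_fn, delay=2.0):
            self.write_fn = write_fn
            self.delay = delay
            self.timer = None
            self.latest = None

        def update(self, nb_cfg):
            self.latest = nb_cfg
            if self.timer is not None:
                self.timer.cancel()          # coalesce rapid updates
            self.timer = threading.Timer(
                self.delay, lambda: self.write_fn(self.latest))
            self.timer.start()

    ack = DebouncedAck(lambda v: print("ack nb_cfg", v))
    ack.update(5)
    ack.update(6)   # only 6 is written, two seconds after the last update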
Nov 26 23:25:38 compute-0 podman[242565]: 2025-11-26 23:25:38.873883345 +0000 UTC m=+0.165969551 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 23:25:39 compute-0 nova_compute[189387]: 2025-11-26 23:25:39.122 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:25:39 compute-0 nova_compute[189387]: 2025-11-26 23:25:39.914 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:25:40 compute-0 nova_compute[189387]: 2025-11-26 23:25:40.141 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:25:41 compute-0 podman[242592]: 2025-11-26 23:25:41.808984089 +0000 UTC m=+0.106814425 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 23:25:41 compute-0 podman[242598]: 2025-11-26 23:25:41.810755467 +0000 UTC m=+0.090354388 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, distribution-scope=public, name=ubi9-minimal, build-date=2025-08-20T13:12:41, release=1755695350, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, io.openshift.expose-services=, architecture=x86_64)
Nov 26 23:25:41 compute-0 podman[242593]: 2025-11-26 23:25:41.83189998 +0000 UTC m=+0.114299635 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:25:41 compute-0 podman[242591]: 2025-11-26 23:25:41.839006099 +0000 UTC m=+0.135596642 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, release-0.7.12=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release=1214.1726694543, distribution-scope=public)
Nov 26 23:25:41 compute-0 podman[242594]: 2025-11-26 23:25:41.854323907 +0000 UTC m=+0.132405687 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
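Note: the burst of health_status entries above is podman's timer-driven healthchecks for the EDPM containers; each record includes the configured test (for example '/openstack/healthcheck compute') and the outcome, here healthy with health_failing_streak=0. The same check can be triggered by hand; podman healthcheck run exits 0 when the container is healthy:

    import subprocess

    # container name taken from the ovn_controller entry above
    result = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if result.returncode == 0 else "unhealthy")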
Nov 26 23:25:44 compute-0 nova_compute[189387]: 2025-11-26 23:25:44.626 189391 DEBUG oslo_concurrency.lockutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "f0ac9c29-04ba-4737-8af6-8fc91e451e8c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:25:44 compute-0 nova_compute[189387]: 2025-11-26 23:25:44.627 189391 DEBUG oslo_concurrency.lockutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "f0ac9c29-04ba-4737-8af6-8fc91e451e8c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:25:44 compute-0 nova_compute[189387]: 2025-11-26 23:25:44.643 189391 DEBUG nova.compute.manager [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 26 23:25:44 compute-0 nova_compute[189387]: 2025-11-26 23:25:44.757 189391 DEBUG oslo_concurrency.lockutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:25:44 compute-0 nova_compute[189387]: 2025-11-26 23:25:44.758 189391 DEBUG oslo_concurrency.lockutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:25:44 compute-0 nova_compute[189387]: 2025-11-26 23:25:44.768 189391 DEBUG nova.virt.hardware [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 26 23:25:44 compute-0 nova_compute[189387]: 2025-11-26 23:25:44.769 189391 INFO nova.compute.claims [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 26 23:25:44 compute-0 nova_compute[189387]: 2025-11-26 23:25:44.916 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:25:44 compute-0 nova_compute[189387]: 2025-11-26 23:25:44.947 189391 DEBUG nova.compute.provider_tree [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:25:44 compute-0 nova_compute[189387]: 2025-11-26 23:25:44.964 189391 DEBUG nova.scheduler.client.report [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 23:25:44 compute-0 nova_compute[189387]: 2025-11-26 23:25:44.985 189391 DEBUG oslo_concurrency.lockutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
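Note: lines 23:25:44.757 through 23:25:44.985 show the claim protocol. All mutations of the tracker's accounting happen behind the named lock "compute_resources" (the same lock the periodic _update_available_resource task held at 23:25:30); under it the NUMA fit is tested, the claim recorded, and placement inventory reconciled before release. The serialization itself is oslo.concurrency's synchronized decorator; a toy use of the real API (the resource dict and numbers are stand-ins):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def claim(resources, instance):
        # runs under the same kind of named in-process lock that the log
        # reports as Acquiring/acquired/released "compute_resources"
        resources["used_vcpus"] += instance["VCPU"]
        resources["used_ram_mb"] += instance["MEMORY_MB"]

    res = {"used_vcpus": 3, "used_ram_mb": 2048}
    claim(res, {"VCPU": 1, "MEMORY_MB": 512})
    print(res)   # {'used_vcpus': 4, 'used_ram_mb': 2560}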
Nov 26 23:25:44 compute-0 nova_compute[189387]: 2025-11-26 23:25:44.986 189391 DEBUG nova.compute.manager [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.051 189391 DEBUG nova.compute.manager [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.052 189391 DEBUG nova.network.neutron [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.074 189391 INFO nova.virt.libvirt.driver [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.105 189391 DEBUG nova.compute.manager [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.145 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.203 189391 DEBUG nova.compute.manager [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.205 189391 DEBUG nova.virt.libvirt.driver [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.205 189391 INFO nova.virt.libvirt.driver [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Creating image(s)#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.206 189391 DEBUG oslo_concurrency.lockutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "/var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.207 189391 DEBUG oslo_concurrency.lockutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "/var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.207 189391 DEBUG oslo_concurrency.lockutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "/var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.220 189391 DEBUG oslo_concurrency.processutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.298 189391 DEBUG oslo_concurrency.processutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.299 189391 DEBUG oslo_concurrency.lockutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "88820ed9476b98465b4ed33781797613b42e7ead" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.300 189391 DEBUG oslo_concurrency.lockutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "88820ed9476b98465b4ed33781797613b42e7ead" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.311 189391 DEBUG oslo_concurrency.processutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.365 189391 DEBUG oslo_concurrency.processutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.366 189391 DEBUG oslo_concurrency.processutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead,backing_fmt=raw /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.411 189391 DEBUG oslo_concurrency.processutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead,backing_fmt=raw /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk 1073741824" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.412 189391 DEBUG oslo_concurrency.lockutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "88820ed9476b98465b4ed33781797613b42e7ead" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
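Note: the qemu-img create at 23:25:45.366 shows the copy-on-write layout nova's Qcow2 backend uses. The instance's root disk is a thin qcow2 overlay whose backing file is the shared raw base image under _base/, created while holding a lock named after the base image so concurrent spawns don't race on the cache (the disk.eph0 ephemeral disk below gets the same treatment against ephemeral_1_0706d66). Reproducing the overlay step by hand, with the command and paths exactly as logged (running it requires the base image to exist):

    import subprocess

    base = "/var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead"
    overlay = "/var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk"

    subprocess.run(
        ["qemu-img", "create", "-f", "qcow2",
         "-o", f"backing_file={base},backing_fmt=raw",
         overlay, "1073741824"],   # 1 GiB virtual size, as logged
        check=True)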
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.413 189391 DEBUG oslo_concurrency.processutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.466 189391 DEBUG oslo_concurrency.processutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.468 189391 DEBUG nova.virt.disk.api [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Checking if we can resize image /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.469 189391 DEBUG oslo_concurrency.processutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.525 189391 DEBUG oslo_concurrency.processutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.527 189391 DEBUG nova.virt.disk.api [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Cannot resize image /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
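Note: the "Cannot resize image ... to a smaller size" debug line is nova's resize guard, not an error. can_resize_image compares the requested size, here 1073741824 bytes, against the image's current virtual size and only allows growth; the overlay was just created at exactly 1 GiB, so the resize is skipped. A sketch of that comparison, reusing qemu-img's JSON output:

    import json
    import subprocess

    def can_resize(path, new_size_bytes):
        info = json.loads(subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            capture_output=True, text=True, check=True).stdout)
        return new_size_bytes > info["virtual-size"]   # growth only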
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.528 189391 DEBUG nova.objects.instance [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lazy-loading 'migration_context' on Instance uuid f0ac9c29-04ba-4737-8af6-8fc91e451e8c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.548 189391 DEBUG oslo_concurrency.lockutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "/var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.549 189391 DEBUG oslo_concurrency.lockutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "/var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.551 189391 DEBUG oslo_concurrency.lockutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "/var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.578 189391 DEBUG oslo_concurrency.processutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.652 189391 DEBUG oslo_concurrency.processutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.654 189391 DEBUG oslo_concurrency.lockutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.655 189391 DEBUG oslo_concurrency.lockutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.680 189391 DEBUG oslo_concurrency.processutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.739 189391 DEBUG oslo_concurrency.processutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.740 189391 DEBUG oslo_concurrency.processutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.799 189391 DEBUG oslo_concurrency.processutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 1073741824" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.801 189391 DEBUG oslo_concurrency.lockutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.802 189391 DEBUG oslo_concurrency.processutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.865 189391 DEBUG oslo_concurrency.processutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.868 189391 DEBUG nova.virt.libvirt.driver [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.869 189391 DEBUG nova.virt.libvirt.driver [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Ensure instance console log exists: /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.870 189391 DEBUG oslo_concurrency.lockutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.871 189391 DEBUG oslo_concurrency.lockutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:25:45 compute-0 nova_compute[189387]: 2025-11-26 23:25:45.872 189391 DEBUG oslo_concurrency.lockutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
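The acquire/release pairs logged here are oslo.concurrency's named-lock pattern: every caller decorated with the same lock name is serialized, and the inner wrapper emits exactly these waited/held DEBUG lines. A minimal sketch of the pattern (the function body is a placeholder, not Nova's _allocate_mdevs):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('vgpu_resources')
    def allocate_mdevs(requested):
        # Runs with the "vgpu_resources" lock held; concurrent callers
        # block and are logged with their wait/hold times as above.
        return []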
Nov 26 23:25:48 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 26 23:25:48 compute-0 nova_compute[189387]: 2025-11-26 23:25:48.795 189391 DEBUG nova.network.neutron [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Successfully updated port: 31b6bc9a-cd65-44ef-96ea-c84d392117c8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 26 23:25:48 compute-0 nova_compute[189387]: 2025-11-26 23:25:48.818 189391 DEBUG oslo_concurrency.lockutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "refresh_cache-f0ac9c29-04ba-4737-8af6-8fc91e451e8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:25:48 compute-0 nova_compute[189387]: 2025-11-26 23:25:48.819 189391 DEBUG oslo_concurrency.lockutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquired lock "refresh_cache-f0ac9c29-04ba-4737-8af6-8fc91e451e8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:25:48 compute-0 nova_compute[189387]: 2025-11-26 23:25:48.820 189391 DEBUG nova.network.neutron [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 26 23:25:48 compute-0 nova_compute[189387]: 2025-11-26 23:25:48.887 189391 DEBUG nova.compute.manager [req-2faf51e5-f273-4964-afda-1093c0131bb2 req-afc60965-250e-40f9-8b5e-c434a4687a6a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Received event network-changed-31b6bc9a-cd65-44ef-96ea-c84d392117c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:25:48 compute-0 nova_compute[189387]: 2025-11-26 23:25:48.888 189391 DEBUG nova.compute.manager [req-2faf51e5-f273-4964-afda-1093c0131bb2 req-afc60965-250e-40f9-8b5e-c434a4687a6a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Refreshing instance network info cache due to event network-changed-31b6bc9a-cd65-44ef-96ea-c84d392117c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 23:25:48 compute-0 nova_compute[189387]: 2025-11-26 23:25:48.889 189391 DEBUG oslo_concurrency.lockutils [req-2faf51e5-f273-4964-afda-1093c0131bb2 req-afc60965-250e-40f9-8b5e-c434a4687a6a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "refresh_cache-f0ac9c29-04ba-4737-8af6-8fc91e451e8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:25:48 compute-0 nova_compute[189387]: 2025-11-26 23:25:48.964 189391 DEBUG nova.network.neutron [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 26 23:25:49 compute-0 nova_compute[189387]: 2025-11-26 23:25:49.920 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:25:50 compute-0 nova_compute[189387]: 2025-11-26 23:25:50.148 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:25:50 compute-0 nova_compute[189387]: 2025-11-26 23:25:50.926 189391 DEBUG nova.network.neutron [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Updating instance_info_cache with network_info: [{"id": "31b6bc9a-cd65-44ef-96ea-c84d392117c8", "address": "fa:16:3e:22:3f:da", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.69", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31b6bc9a-cd", "ovs_interfaceid": "31b6bc9a-cd65-44ef-96ea-c84d392117c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
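The network_info blob above is ordinary JSON once lifted out of the log line. A sketch that extracts the fields usually needed when debugging port binding, with the literal reduced to the values shown in the log:

    network_info = [{
        "id": "31b6bc9a-cd65-44ef-96ea-c84d392117c8",
        "address": "fa:16:3e:22:3f:da",
        "network": {
            "meta": {"mtu": 1442},
            "subnets": [{
                "cidr": "192.168.0.0/24",
                "ips": [{
                    "address": "192.168.0.69", "type": "fixed",
                    "floating_ips": [{"address": "192.168.122.192"}],
                }],
            }],
        },
    }]

    for vif in network_info:
        print(vif["id"], vif["address"], "mtu", vif["network"]["meta"]["mtu"])
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(" ", ip["type"], ip["address"], "floating:", floats)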
Nov 26 23:25:50 compute-0 nova_compute[189387]: 2025-11-26 23:25:50.949 189391 DEBUG oslo_concurrency.lockutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Releasing lock "refresh_cache-f0ac9c29-04ba-4737-8af6-8fc91e451e8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:25:50 compute-0 nova_compute[189387]: 2025-11-26 23:25:50.950 189391 DEBUG nova.compute.manager [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Instance network_info: |[{"id": "31b6bc9a-cd65-44ef-96ea-c84d392117c8", "address": "fa:16:3e:22:3f:da", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.69", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31b6bc9a-cd", "ovs_interfaceid": "31b6bc9a-cd65-44ef-96ea-c84d392117c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 26 23:25:50 compute-0 nova_compute[189387]: 2025-11-26 23:25:50.950 189391 DEBUG oslo_concurrency.lockutils [req-2faf51e5-f273-4964-afda-1093c0131bb2 req-afc60965-250e-40f9-8b5e-c434a4687a6a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquired lock "refresh_cache-f0ac9c29-04ba-4737-8af6-8fc91e451e8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:25:50 compute-0 nova_compute[189387]: 2025-11-26 23:25:50.951 189391 DEBUG nova.network.neutron [req-2faf51e5-f273-4964-afda-1093c0131bb2 req-afc60965-250e-40f9-8b5e-c434a4687a6a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Refreshing network info cache for port 31b6bc9a-cd65-44ef-96ea-c84d392117c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 23:25:50 compute-0 nova_compute[189387]: 2025-11-26 23:25:50.953 189391 DEBUG nova.virt.libvirt.driver [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Start _get_guest_xml network_info=[{"id": "31b6bc9a-cd65-44ef-96ea-c84d392117c8", "address": "fa:16:3e:22:3f:da", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.69", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31b6bc9a-cd", "ovs_interfaceid": "31b6bc9a-cd65-44ef-96ea-c84d392117c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-26T23:17:52Z,direct_url=<?>,disk_format='qcow2',id=422f324f-e13a-4c74-ba29-023e791ed636,min_disk=0,min_ram=0,name='cirros',owner='dd2e793599b6418881c391df7f71e0c6',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-26T23:17:53Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': '422f324f-e13a-4c74-ba29-023e791ed636'}], 'ephemerals': [{'size': 1, 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 26 23:25:50 compute-0 nova_compute[189387]: 2025-11-26 23:25:50.963 189391 WARNING nova.virt.libvirt.driver [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 23:25:50 compute-0 nova_compute[189387]: 2025-11-26 23:25:50.969 189391 DEBUG nova.virt.libvirt.host [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 23:25:50 compute-0 nova_compute[189387]: 2025-11-26 23:25:50.970 189391 DEBUG nova.virt.libvirt.host [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 23:25:50 compute-0 nova_compute[189387]: 2025-11-26 23:25:50.980 189391 DEBUG nova.virt.libvirt.host [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 23:25:50 compute-0 nova_compute[189387]: 2025-11-26 23:25:50.981 189391 DEBUG nova.virt.libvirt.host [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
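The two probes above decide where CPU quota knobs live: under cgroups v1 the cpu controller would have its own hierarchy, while on a v2-only host like this one the available controllers are listed in a single file. Roughly what the successful v2 check amounts to (a sketch of the mechanism, not Nova's code):

    from pathlib import Path

    def has_cgroupsv2_cpu_controller() -> bool:
        controllers = Path("/sys/fs/cgroup/cgroup.controllers")
        # On a cgroup-v2 host this file lists e.g. "cpuset cpu io memory ...".
        return controllers.exists() and "cpu" in controllers.read_text().split()

    print(has_cgroupsv2_cpu_controller())  # True on this host, per the log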
Nov 26 23:25:50 compute-0 nova_compute[189387]: 2025-11-26 23:25:50.981 189391 DEBUG nova.virt.libvirt.driver [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 23:25:50 compute-0 nova_compute[189387]: 2025-11-26 23:25:50.982 189391 DEBUG nova.virt.hardware [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T23:17:57Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='abcd883d-a9af-4dee-93ae-b5623bc853b6',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-26T23:17:52Z,direct_url=<?>,disk_format='qcow2',id=422f324f-e13a-4c74-ba29-023e791ed636,min_disk=0,min_ram=0,name='cirros',owner='dd2e793599b6418881c391df7f71e0c6',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-26T23:17:53Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 23:25:50 compute-0 nova_compute[189387]: 2025-11-26 23:25:50.982 189391 DEBUG nova.virt.hardware [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 23:25:50 compute-0 nova_compute[189387]: 2025-11-26 23:25:50.983 189391 DEBUG nova.virt.hardware [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 23:25:50 compute-0 nova_compute[189387]: 2025-11-26 23:25:50.983 189391 DEBUG nova.virt.hardware [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 23:25:50 compute-0 nova_compute[189387]: 2025-11-26 23:25:50.983 189391 DEBUG nova.virt.hardware [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 23:25:50 compute-0 nova_compute[189387]: 2025-11-26 23:25:50.983 189391 DEBUG nova.virt.hardware [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 23:25:50 compute-0 nova_compute[189387]: 2025-11-26 23:25:50.984 189391 DEBUG nova.virt.hardware [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 23:25:50 compute-0 nova_compute[189387]: 2025-11-26 23:25:50.984 189391 DEBUG nova.virt.hardware [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 23:25:50 compute-0 nova_compute[189387]: 2025-11-26 23:25:50.984 189391 DEBUG nova.virt.hardware [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 23:25:50 compute-0 nova_compute[189387]: 2025-11-26 23:25:50.985 189391 DEBUG nova.virt.hardware [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 23:25:50 compute-0 nova_compute[189387]: 2025-11-26 23:25:50.985 189391 DEBUG nova.virt.hardware [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
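The topology walk above enumerates every (sockets, cores, threads) split whose product equals the flavor's vCPU count, bounded by the 65536 per-field ceiling, then sorts by preference; with vcpus=1 the only candidate is 1:1:1, matching "Got 1 possible topologies". A conceptual sketch of that enumeration (not Nova's actual implementation):

    def possible_topologies(vcpus, limit=65536):
        bound = min(vcpus, limit)
        for sockets in range(1, bound + 1):
            for cores in range(1, bound + 1):
                for threads in range(1, bound + 1):
                    if sockets * cores * threads == vcpus:
                        yield (sockets, cores, threads)

    print(list(possible_topologies(1)))   # [(1, 1, 1)]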
Nov 26 23:25:50 compute-0 nova_compute[189387]: 2025-11-26 23:25:50.988 189391 DEBUG nova.virt.libvirt.vif [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T23:25:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-fhdmirp-gcwraztym6um-bi3jxhg2edck-vnf-4tssxs7u7dl3',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-fhdmirp-gcwraztym6um-bi3jxhg2edck-vnf-4tssxs7u7dl3',id=4,image_ref='422f324f-e13a-4c74-ba29-023e791ed636',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='6ec897c5-079b-468e-ab49-e7a7350f9bc9'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd2e793599b6418881c391df7f71e0c6',ramdisk_id='',reservation_id='r-55cchsee',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='422f324f-e13a-4c74-ba29-023e791ed636',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T23:25:45Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03MjIxOTE5MDExMzk1NTkzNDg4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTcyMjE5MTkwMTEzOTU1OTM0ODg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzIyMTkxOTAxMTM5NTU5MzQ4OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTcyMjE5MTkwMTEzOTU1OTM0ODg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03MjIxOTE5MDExMzk1NTkzNDg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03MjIxOTE5MDExMzk1NTkzNDg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Nov 26 23:25:50 compute-0 nova_compute[189387]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzIyMTkxOTAxMTM5NTU5MzQ4OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTcyMjE5MTkwMTEzOTU1OTM0ODg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03MjIxOTE5MDExMzk1NTkzNDg4PT0tLQo=',user_id='6ad061874c77438db2e6d8efb2b1400b',uuid=f0ac9c29-04ba-4737-8af6-8fc91e451e8c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31b6bc9a-cd65-44ef-96ea-c84d392117c8", "address": "fa:16:3e:22:3f:da", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.69", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31b6bc9a-cd", "ovs_interfaceid": "31b6bc9a-cd65-44ef-96ea-c84d392117c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 26 23:25:50 compute-0 nova_compute[189387]: 2025-11-26 23:25:50.988 189391 DEBUG nova.network.os_vif_util [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Converting VIF {"id": "31b6bc9a-cd65-44ef-96ea-c84d392117c8", "address": "fa:16:3e:22:3f:da", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.69", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31b6bc9a-cd", "ovs_interfaceid": "31b6bc9a-cd65-44ef-96ea-c84d392117c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:25:50 compute-0 nova_compute[189387]: 2025-11-26 23:25:50.989 189391 DEBUG nova.network.os_vif_util [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:3f:da,bridge_name='br-int',has_traffic_filtering=True,id=31b6bc9a-cd65-44ef-96ea-c84d392117c8,network=Network(16c31f2c-5dd2-49b9-b313-1ecd3b059554),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31b6bc9a-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:25:50 compute-0 nova_compute[189387]: 2025-11-26 23:25:50.990 189391 DEBUG nova.objects.instance [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lazy-loading 'pci_devices' on Instance uuid f0ac9c29-04ba-4737-8af6-8fc91e451e8c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:25:51 compute-0 nova_compute[189387]: 2025-11-26 23:25:51.007 189391 DEBUG nova.virt.libvirt.driver [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] End _get_guest_xml xml=<domain type="kvm">
Nov 26 23:25:51 compute-0 nova_compute[189387]:  <uuid>f0ac9c29-04ba-4737-8af6-8fc91e451e8c</uuid>
Nov 26 23:25:51 compute-0 nova_compute[189387]:  <name>instance-00000004</name>
Nov 26 23:25:51 compute-0 nova_compute[189387]:  <memory>524288</memory>
Nov 26 23:25:51 compute-0 nova_compute[189387]:  <vcpu>1</vcpu>
Nov 26 23:25:51 compute-0 nova_compute[189387]:  <metadata>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <nova:name>vn-fhdmirp-gcwraztym6um-bi3jxhg2edck-vnf-4tssxs7u7dl3</nova:name>
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <nova:creationTime>2025-11-26 23:25:50</nova:creationTime>
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <nova:flavor name="m1.small">
Nov 26 23:25:51 compute-0 nova_compute[189387]:        <nova:memory>512</nova:memory>
Nov 26 23:25:51 compute-0 nova_compute[189387]:        <nova:disk>1</nova:disk>
Nov 26 23:25:51 compute-0 nova_compute[189387]:        <nova:swap>0</nova:swap>
Nov 26 23:25:51 compute-0 nova_compute[189387]:        <nova:ephemeral>1</nova:ephemeral>
Nov 26 23:25:51 compute-0 nova_compute[189387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 23:25:51 compute-0 nova_compute[189387]:      </nova:flavor>
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <nova:owner>
Nov 26 23:25:51 compute-0 nova_compute[189387]:        <nova:user uuid="6ad061874c77438db2e6d8efb2b1400b">admin</nova:user>
Nov 26 23:25:51 compute-0 nova_compute[189387]:        <nova:project uuid="dd2e793599b6418881c391df7f71e0c6">admin</nova:project>
Nov 26 23:25:51 compute-0 nova_compute[189387]:      </nova:owner>
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <nova:root type="image" uuid="422f324f-e13a-4c74-ba29-023e791ed636"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <nova:ports>
Nov 26 23:25:51 compute-0 nova_compute[189387]:        <nova:port uuid="31b6bc9a-cd65-44ef-96ea-c84d392117c8">
Nov 26 23:25:51 compute-0 nova_compute[189387]:          <nova:ip type="fixed" address="192.168.0.69" ipVersion="4"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:        </nova:port>
Nov 26 23:25:51 compute-0 nova_compute[189387]:      </nova:ports>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    </nova:instance>
Nov 26 23:25:51 compute-0 nova_compute[189387]:  </metadata>
Nov 26 23:25:51 compute-0 nova_compute[189387]:  <sysinfo type="smbios">
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <system>
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <entry name="manufacturer">RDO</entry>
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <entry name="serial">f0ac9c29-04ba-4737-8af6-8fc91e451e8c</entry>
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <entry name="uuid">f0ac9c29-04ba-4737-8af6-8fc91e451e8c</entry>
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <entry name="family">Virtual Machine</entry>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    </system>
Nov 26 23:25:51 compute-0 nova_compute[189387]:  </sysinfo>
Nov 26 23:25:51 compute-0 nova_compute[189387]:  <os>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <boot dev="hd"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <smbios mode="sysinfo"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:  </os>
Nov 26 23:25:51 compute-0 nova_compute[189387]:  <features>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <acpi/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <apic/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <vmcoreinfo/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:  </features>
Nov 26 23:25:51 compute-0 nova_compute[189387]:  <clock offset="utc">
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <timer name="hpet" present="no"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:  </clock>
Nov 26 23:25:51 compute-0 nova_compute[189387]:  <cpu mode="host-model" match="exact">
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:  </cpu>
Nov 26 23:25:51 compute-0 nova_compute[189387]:  <devices>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <disk type="file" device="disk">
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <target dev="vda" bus="virtio"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <disk type="file" device="disk">
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <target dev="vdb" bus="virtio"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <disk type="file" device="cdrom">
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <driver name="qemu" type="raw" cache="none"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.config"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <target dev="sda" bus="sata"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <interface type="ethernet">
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <mac address="fa:16:3e:22:3f:da"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <mtu size="1442"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <target dev="tap31b6bc9a-cd"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    </interface>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <serial type="pty">
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <log file="/var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/console.log" append="off"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    </serial>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <video>
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    </video>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <input type="tablet" bus="usb"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <rng model="virtio">
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <backend model="random">/dev/urandom</backend>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    </rng>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <controller type="usb" index="0"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    <memballoon model="virtio">
Nov 26 23:25:51 compute-0 nova_compute[189387]:      <stats period="10"/>
Nov 26 23:25:51 compute-0 nova_compute[189387]:    </memballoon>
Nov 26 23:25:51 compute-0 nova_compute[189387]:  </devices>
Nov 26 23:25:51 compute-0 nova_compute[189387]: </domain>
Nov 26 23:25:51 compute-0 nova_compute[189387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
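The dump above is a complete libvirt domain document, so anything that accepts domain XML can consume it. An illustrative snippet with the libvirt-python bindings (Nova drives libvirt through its own Guest wrapper, so this is a sketch only, and "instance-00000004.xml" is a hypothetical file holding the XML above):

    import libvirt

    xml = open("instance-00000004.xml").read()
    conn = libvirt.open("qemu:///system")
    dom = conn.defineXML(xml)   # persist the definition
    dom.create()                # start the guest
    print(dom.name(), dom.ID())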
Nov 26 23:25:51 compute-0 nova_compute[189387]: 2025-11-26 23:25:51.008 189391 DEBUG nova.compute.manager [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Preparing to wait for external event network-vif-plugged-31b6bc9a-cd65-44ef-96ea-c84d392117c8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 26 23:25:51 compute-0 nova_compute[189387]: 2025-11-26 23:25:51.009 189391 DEBUG oslo_concurrency.lockutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "f0ac9c29-04ba-4737-8af6-8fc91e451e8c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:25:51 compute-0 nova_compute[189387]: 2025-11-26 23:25:51.009 189391 DEBUG oslo_concurrency.lockutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "f0ac9c29-04ba-4737-8af6-8fc91e451e8c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:25:51 compute-0 nova_compute[189387]: 2025-11-26 23:25:51.009 189391 DEBUG oslo_concurrency.lockutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "f0ac9c29-04ba-4737-8af6-8fc91e451e8c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
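The event registration above is the first half of Nova's plug handshake: interest in network-vif-plugged is recorded before the VIF is plugged, so Neutron's notification cannot arrive in the gap and be dropped. A plain-threading stand-in for the pattern (Nova's real implementation is eventlet-based; names here are illustrative):

    import threading

    events = {}

    def prepare(tag):
        events[tag] = threading.Event()

    def deliver(tag):
        # invoked when the external event arrives from Neutron
        events[tag].set()

    tag = "network-vif-plugged-31b6bc9a-cd65-44ef-96ea-c84d392117c8"
    prepare(tag)
    # ... plug the VIF, define and launch the domain ...
    plugged = events[tag].wait(timeout=300)   # Nova's default vif_plugging_timeout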
Nov 26 23:25:51 compute-0 nova_compute[189387]: 2025-11-26 23:25:51.010 189391 DEBUG nova.virt.libvirt.vif [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T23:25:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-fhdmirp-gcwraztym6um-bi3jxhg2edck-vnf-4tssxs7u7dl3',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-fhdmirp-gcwraztym6um-bi3jxhg2edck-vnf-4tssxs7u7dl3',id=4,image_ref='422f324f-e13a-4c74-ba29-023e791ed636',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='6ec897c5-079b-468e-ab49-e7a7350f9bc9'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dd2e793599b6418881c391df7f71e0c6',ramdisk_id='',reservation_id='r-55cchsee',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='422f324f-e13a-4c74-ba29-023e791ed636',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T23:25:45Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03MjIxOTE5MDExMzk1NTkzNDg4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTcyMjE5MTkwMTEzOTU1OTM0ODg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzIyMTkxOTAxMTM5NTU5MzQ4OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTcyMjE5MTkwMTEzOTU1OTM0ODg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03MjIxOTE5MDExMzk1NTkzNDg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03MjIxOTE5MDExMzk1NTkzNDg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Nov 26 23:25:51 compute-0 nova_compute[189387]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzIyMTkxOTAxMTM5NTU5MzQ4OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTcyMjE5MTkwMTEzOTU1OTM0ODg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03MjIxOTE5MDExMzk1NTkzNDg4PT0tLQo=',user_id='6ad061874c77438db2e6d8efb2b1400b',uuid=f0ac9c29-04ba-4737-8af6-8fc91e451e8c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "31b6bc9a-cd65-44ef-96ea-c84d392117c8", "address": "fa:16:3e:22:3f:da", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.69", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31b6bc9a-cd", "ovs_interfaceid": "31b6bc9a-cd65-44ef-96ea-c84d392117c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 26 23:25:51 compute-0 nova_compute[189387]: 2025-11-26 23:25:51.010 189391 DEBUG nova.network.os_vif_util [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Converting VIF {"id": "31b6bc9a-cd65-44ef-96ea-c84d392117c8", "address": "fa:16:3e:22:3f:da", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.69", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31b6bc9a-cd", "ovs_interfaceid": "31b6bc9a-cd65-44ef-96ea-c84d392117c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:25:51 compute-0 nova_compute[189387]: 2025-11-26 23:25:51.011 189391 DEBUG nova.network.os_vif_util [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:3f:da,bridge_name='br-int',has_traffic_filtering=True,id=31b6bc9a-cd65-44ef-96ea-c84d392117c8,network=Network(16c31f2c-5dd2-49b9-b313-1ecd3b059554),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31b6bc9a-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:25:51 compute-0 nova_compute[189387]: 2025-11-26 23:25:51.012 189391 DEBUG os_vif [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:3f:da,bridge_name='br-int',has_traffic_filtering=True,id=31b6bc9a-cd65-44ef-96ea-c84d392117c8,network=Network(16c31f2c-5dd2-49b9-b313-1ecd3b059554),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31b6bc9a-cd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
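
These three records show the nova-to-os-vif handoff: nova_to_osvif_vif() converts the neutron port dict into a VIFOpenVSwitch object, which is then handed to os_vif.plug(). A minimal sketch of that public API, with the field values copied from the log and the variable names ours:

    import os_vif
    from os_vif.objects.instance_info import InstanceInfo
    from os_vif.objects.network import Network
    from os_vif.objects.vif import VIFOpenVSwitch

    os_vif.initialize()  # loads the 'ovs' plugin used in this log
    vif = VIFOpenVSwitch(
        id='31b6bc9a-cd65-44ef-96ea-c84d392117c8',
        address='fa:16:3e:22:3f:da',
        bridge_name='br-int',
        vif_name='tap31b6bc9a-cd',
        network=Network(id='16c31f2c-5dd2-49b9-b313-1ecd3b059554'))
    instance = InstanceInfo(uuid='f0ac9c29-04ba-4737-8af6-8fc91e451e8c',
                            name='instance-00000004')
    os_vif.plug(vif, instance)  # logs "Successfully plugged vif ..." on success
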
Nov 26 23:25:51 compute-0 nova_compute[189387]: 2025-11-26 23:25:51.012 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:25:51 compute-0 nova_compute[189387]: 2025-11-26 23:25:51.013 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:25:51 compute-0 nova_compute[189387]: 2025-11-26 23:25:51.013 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:25:51 compute-0 nova_compute[189387]: 2025-11-26 23:25:51.016 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:25:51 compute-0 nova_compute[189387]: 2025-11-26 23:25:51.016 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap31b6bc9a-cd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:25:51 compute-0 nova_compute[189387]: 2025-11-26 23:25:51.017 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap31b6bc9a-cd, col_values=(('external_ids', {'iface-id': '31b6bc9a-cd65-44ef-96ea-c84d392117c8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:22:3f:da', 'vm-uuid': 'f0ac9c29-04ba-4737-8af6-8fc91e451e8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
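
The transaction above is the os-vif ovs plugin driving ovsdbapp: one AddPortCommand for the tap device, then a DbSetCommand stamping neutron's iface-id and the instance MAC/UUID onto the Interface row. The same pair expressed against ovsdbapp's transaction API would look roughly like this (the socket path and timeout are assumptions, not taken from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap31b6bc9a-cd', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap31b6bc9a-cd',
            ('external_ids', {
                'iface-id': '31b6bc9a-cd65-44ef-96ea-c84d392117c8',
                'attached-mac': 'fa:16:3e:22:3f:da',
                'vm-uuid': 'f0ac9c29-04ba-4737-8af6-8fc91e451e8c'})))
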
Nov 26 23:25:51 compute-0 nova_compute[189387]: 2025-11-26 23:25:51.019 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:25:51 compute-0 NetworkManager[56227]: <info>  [1764199551.0208] manager: (tap31b6bc9a-cd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Nov 26 23:25:51 compute-0 nova_compute[189387]: 2025-11-26 23:25:51.022 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:25:51 compute-0 nova_compute[189387]: 2025-11-26 23:25:51.032 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:25:51 compute-0 nova_compute[189387]: 2025-11-26 23:25:51.033 189391 INFO os_vif [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:3f:da,bridge_name='br-int',has_traffic_filtering=True,id=31b6bc9a-cd65-44ef-96ea-c84d392117c8,network=Network(16c31f2c-5dd2-49b9-b313-1ecd3b059554),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31b6bc9a-cd')#033[00m
Nov 26 23:25:51 compute-0 rsyslogd[236865]: message too long (8192) with configured size 8096, begin of message is: 2025-11-26 23:25:50.988 189391 DEBUG nova.virt.libvirt.vif [None req-ee089ab6-5b [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
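
This rsyslog complaint is why the userdata dump earlier arrived truncated: any message over the configured maximum (8096 bytes here) is cut off, which is what the referenced error page describes. If complete oversized DEBUG records are wanted, the cap can be raised in rsyslog.conf before any modules or inputs are loaded; 64k below is an arbitrary example value, not a recommendation from this log:

    # /etc/rsyslog.conf (RainerScript form; must appear early in the file)
    global(maxMessageSize="64k")
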
Nov 26 23:25:51 compute-0 nova_compute[189387]: 2025-11-26 23:25:51.104 189391 DEBUG nova.virt.libvirt.driver [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:25:51 compute-0 nova_compute[189387]: 2025-11-26 23:25:51.105 189391 DEBUG nova.virt.libvirt.driver [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:25:51 compute-0 nova_compute[189387]: 2025-11-26 23:25:51.106 189391 DEBUG nova.virt.libvirt.driver [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:25:51 compute-0 nova_compute[189387]: 2025-11-26 23:25:51.106 189391 DEBUG nova.virt.libvirt.driver [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] No VIF found with MAC fa:16:3e:22:3f:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 26 23:25:51 compute-0 nova_compute[189387]: 2025-11-26 23:25:51.107 189391 INFO nova.virt.libvirt.driver [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Using config drive#033[00m
Nov 26 23:25:51 compute-0 rsyslogd[236865]: message too long (8192) with configured size 8096, begin of message is: 2025-11-26 23:25:51.010 189391 DEBUG nova.virt.libvirt.vif [None req-ee089ab6-5b [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 26 23:25:51 compute-0 podman[242718]: 2025-11-26 23:25:51.863231904 +0000 UTC m=+0.148278669 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:25:52 compute-0 nova_compute[189387]: 2025-11-26 23:25:52.112 189391 INFO nova.virt.libvirt.driver [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Creating config drive at /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.config#033[00m
Nov 26 23:25:52 compute-0 nova_compute[189387]: 2025-11-26 23:25:52.125 189391 DEBUG oslo_concurrency.processutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc9coe5f4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:25:52 compute-0 nova_compute[189387]: 2025-11-26 23:25:52.280 189391 DEBUG oslo_concurrency.processutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc9coe5f4" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
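
Building the config drive is a plain shell-out through the oslo.concurrency processutils helper named at processutils.py:384. Replaying the logged command through the same helper would look like the sketch below (the output path is shortened to disk.config for readability; /tmp/tmpc9coe5f4 is the staging directory nova populated):

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        '/usr/bin/mkisofs', '-o', 'disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute', '-quiet', '-J', '-r',
        '-V', 'config-2', '/tmp/tmpc9coe5f4')
    # unlike plain subprocess, this raises ProcessExecutionError on a
    # non-zero exit instead of returning the code
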
Nov 26 23:25:52 compute-0 kernel: tap31b6bc9a-cd: entered promiscuous mode
Nov 26 23:25:52 compute-0 NetworkManager[56227]: <info>  [1764199552.3964] manager: (tap31b6bc9a-cd): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Nov 26 23:25:52 compute-0 ovn_controller[97697]: 2025-11-26T23:25:52Z|00045|binding|INFO|Claiming lport 31b6bc9a-cd65-44ef-96ea-c84d392117c8 for this chassis.
Nov 26 23:25:52 compute-0 ovn_controller[97697]: 2025-11-26T23:25:52Z|00046|binding|INFO|31b6bc9a-cd65-44ef-96ea-c84d392117c8: Claiming fa:16:3e:22:3f:da 192.168.0.69
Nov 26 23:25:52 compute-0 nova_compute[189387]: 2025-11-26 23:25:52.400 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:25:52 compute-0 ovn_controller[97697]: 2025-11-26T23:25:52Z|00047|binding|INFO|Setting lport 31b6bc9a-cd65-44ef-96ea-c84d392117c8 ovn-installed in OVS
Nov 26 23:25:52 compute-0 nova_compute[189387]: 2025-11-26 23:25:52.423 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:25:52 compute-0 nova_compute[189387]: 2025-11-26 23:25:52.428 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:25:52 compute-0 ovn_controller[97697]: 2025-11-26T23:25:52Z|00048|binding|INFO|Setting lport 31b6bc9a-cd65-44ef-96ea-c84d392117c8 up in Southbound
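
At this point ovn-controller has claimed the lport for this chassis, stamped ovn-installed on the OVS Interface row, and marked the port up in the Southbound DB. A quick way to spot-check the OVS side from the node is standard ovs-vsctl usage (this command is our suggestion, not something run in the log):

    import subprocess

    # external_ids should now contain ovn-installed="true" next to the
    # iface-id and attached-mac keys written when the port was added
    print(subprocess.run(
        ['ovs-vsctl', 'get', 'Interface', 'tap31b6bc9a-cd', 'external_ids'],
        capture_output=True, text=True, check=True).stdout)
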
Nov 26 23:25:52 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:25:52.455 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:3f:da 192.168.0.69'], port_security=['fa:16:3e:22:3f:da 192.168.0.69'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-nvijrfhdmirp-gcwraztym6um-bi3jxhg2edck-port-6sibpc4dfvzn', 'neutron:cidrs': '192.168.0.69/24', 'neutron:device_id': 'f0ac9c29-04ba-4737-8af6-8fc91e451e8c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-nvijrfhdmirp-gcwraztym6um-bi3jxhg2edck-port-6sibpc4dfvzn', 'neutron:project_id': 'dd2e793599b6418881c391df7f71e0c6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f63b4453-d311-40b9-8478-8f99967e0625', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef9a1501-6a1b-48e2-a80c-71a5e303b45d, chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=31b6bc9a-cd65-44ef-96ea-c84d392117c8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:25:52 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:25:52.456 106595 INFO neutron.agent.ovn.metadata.agent [-] Port 31b6bc9a-cd65-44ef-96ea-c84d392117c8 in datapath 16c31f2c-5dd2-49b9-b313-1ecd3b059554 bound to our chassis#033[00m
Nov 26 23:25:52 compute-0 systemd-udevd[242757]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 23:25:52 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:25:52.459 106595 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 16c31f2c-5dd2-49b9-b313-1ecd3b059554#033[00m
Nov 26 23:25:52 compute-0 systemd-machined[155674]: New machine qemu-4-instance-00000004.
Nov 26 23:25:52 compute-0 NetworkManager[56227]: <info>  [1764199552.4749] device (tap31b6bc9a-cd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 23:25:52 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Nov 26 23:25:52 compute-0 NetworkManager[56227]: <info>  [1764199552.4794] device (tap31b6bc9a-cd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 23:25:52 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:25:52.488 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[906a332f-707f-4511-abf3-96a0a72049f5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:25:52 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:25:52.529 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[745d3368-a342-4708-bfca-801e2bc9034b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:25:52 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:25:52.532 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a85c70-2328-441f-9e10-3c6702dfa020]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:25:52 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:25:52.570 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[297a8734-e7de-4fca-8b00-44694049e3cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:25:52 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:25:52.597 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[952d4db3-0dc9-4789-807b-d8ccdff8d5c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16c31f2c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:bc:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383451, 'reachable_time': 28107, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242770, 'error': None, 'target': 'ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:25:52 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:25:52.617 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[9adc47dd-f749-4ea8-9892-9d872f564ef6]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap16c31f2c-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383460, 'tstamp': 383460}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242772, 'error': None, 'target': 'ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap16c31f2c-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383463, 'tstamp': 383463}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242772, 'error': None, 'target': 'ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
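
The two privsep replies above are pyroute2 netlink dumps taken inside the ovnmeta- namespace: the veth leg tap16c31f2c-51 carries the metadata address 169.254.169.254/32 plus 192.168.0.2/24 on the tenant subnet. Roughly the same view can be pulled directly with pyroute2; a sketch, assuming the namespace still exists:

    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554')
    try:
        # expect 169.254.169.254/32 and 192.168.0.2/24, as in the dump above
        for addr in ns.get_addr(label='tap16c31f2c-51'):
            print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])
    finally:
        ns.close()
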
Nov 26 23:25:52 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:25:52.619 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16c31f2c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:25:52 compute-0 nova_compute[189387]: 2025-11-26 23:25:52.620 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:25:52 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:25:52.622 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16c31f2c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:25:52 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:25:52.623 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:25:52 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:25:52.623 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap16c31f2c-50, col_values=(('external_ids', {'iface-id': 'fcca7a28-5262-4637-8ef9-d543dee768b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:25:52 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:25:52.623 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:25:52 compute-0 nova_compute[189387]: 2025-11-26 23:25:52.957 189391 DEBUG nova.network.neutron [req-2faf51e5-f273-4964-afda-1093c0131bb2 req-afc60965-250e-40f9-8b5e-c434a4687a6a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Updated VIF entry in instance network info cache for port 31b6bc9a-cd65-44ef-96ea-c84d392117c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 23:25:52 compute-0 nova_compute[189387]: 2025-11-26 23:25:52.958 189391 DEBUG nova.network.neutron [req-2faf51e5-f273-4964-afda-1093c0131bb2 req-afc60965-250e-40f9-8b5e-c434a4687a6a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Updating instance_info_cache with network_info: [{"id": "31b6bc9a-cd65-44ef-96ea-c84d392117c8", "address": "fa:16:3e:22:3f:da", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.69", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31b6bc9a-cd", "ovs_interfaceid": "31b6bc9a-cd65-44ef-96ea-c84d392117c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:25:52 compute-0 nova_compute[189387]: 2025-11-26 23:25:52.987 189391 DEBUG oslo_concurrency.lockutils [req-2faf51e5-f273-4964-afda-1093c0131bb2 req-afc60965-250e-40f9-8b5e-c434a4687a6a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Releasing lock "refresh_cache-f0ac9c29-04ba-4737-8af6-8fc91e451e8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.106 189391 DEBUG nova.compute.manager [req-fab1755e-372f-4844-afdb-578462cec3df req-b69baa8d-324e-429e-b804-80bd051c5c97 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Received event network-vif-plugged-31b6bc9a-cd65-44ef-96ea-c84d392117c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.107 189391 DEBUG oslo_concurrency.lockutils [req-fab1755e-372f-4844-afdb-578462cec3df req-b69baa8d-324e-429e-b804-80bd051c5c97 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "f0ac9c29-04ba-4737-8af6-8fc91e451e8c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.108 189391 DEBUG oslo_concurrency.lockutils [req-fab1755e-372f-4844-afdb-578462cec3df req-b69baa8d-324e-429e-b804-80bd051c5c97 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "f0ac9c29-04ba-4737-8af6-8fc91e451e8c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.109 189391 DEBUG oslo_concurrency.lockutils [req-fab1755e-372f-4844-afdb-578462cec3df req-b69baa8d-324e-429e-b804-80bd051c5c97 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "f0ac9c29-04ba-4737-8af6-8fc91e451e8c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.109 189391 DEBUG nova.compute.manager [req-fab1755e-372f-4844-afdb-578462cec3df req-b69baa8d-324e-429e-b804-80bd051c5c97 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Processing event network-vif-plugged-31b6bc9a-cd65-44ef-96ea-c84d392117c8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.134 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764199553.1336482, f0ac9c29-04ba-4737-8af6-8fc91e451e8c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.134 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] VM Started (Lifecycle Event)#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.139 189391 DEBUG nova.compute.manager [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.147 189391 DEBUG nova.virt.libvirt.driver [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.156 189391 INFO nova.virt.libvirt.driver [-] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Instance spawned successfully.#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.157 189391 DEBUG nova.virt.libvirt.driver [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.167 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.172 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
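
The numeric states in the "Synchronizing instance power state" line come from nova.compute.power_state: the database still holds 0 (NOSTATE) while libvirt just reported 1 (RUNNING), which is exactly the mismatch the handler reconciles once spawning finishes. For reference, the constants as nova defines them:

    # nova/compute/power_state.py
    NOSTATE = 0x00
    RUNNING = 0x01
    PAUSED = 0x03
    SHUTDOWN = 0x04
    CRASHED = 0x06
    SUSPENDED = 0x07
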
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.186 189391 DEBUG nova.virt.libvirt.driver [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.186 189391 DEBUG nova.virt.libvirt.driver [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.187 189391 DEBUG nova.virt.libvirt.driver [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.187 189391 DEBUG nova.virt.libvirt.driver [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.188 189391 DEBUG nova.virt.libvirt.driver [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.188 189391 DEBUG nova.virt.libvirt.driver [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.197 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.197 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764199553.1338904, f0ac9c29-04ba-4737-8af6-8fc91e451e8c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.198 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] VM Paused (Lifecycle Event)#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.271 189391 INFO nova.compute.manager [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Took 8.07 seconds to spawn the instance on the hypervisor.#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.271 189391 DEBUG nova.compute.manager [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.279 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.291 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764199553.146419, f0ac9c29-04ba-4737-8af6-8fc91e451e8c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.292 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] VM Resumed (Lifecycle Event)#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.323 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.328 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.369 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.385 189391 INFO nova.compute.manager [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Took 8.66 seconds to build instance.#033[00m
Nov 26 23:25:53 compute-0 nova_compute[189387]: 2025-11-26 23:25:53.418 189391 DEBUG oslo_concurrency.lockutils [None req-ee089ab6-5b68-49b9-a1d0-3adafa39da3d 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "f0ac9c29-04ba-4737-8af6-8fc91e451e8c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:25:54 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 26 23:25:54 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 26 23:25:54 compute-0 nova_compute[189387]: 2025-11-26 23:25:54.925 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:25:55 compute-0 nova_compute[189387]: 2025-11-26 23:25:55.225 189391 DEBUG nova.compute.manager [req-b209e021-6364-45d4-896a-392d07dd0695 req-ce148735-2660-45fa-9859-3978ca443d23 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Received event network-vif-plugged-31b6bc9a-cd65-44ef-96ea-c84d392117c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:25:55 compute-0 nova_compute[189387]: 2025-11-26 23:25:55.225 189391 DEBUG oslo_concurrency.lockutils [req-b209e021-6364-45d4-896a-392d07dd0695 req-ce148735-2660-45fa-9859-3978ca443d23 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "f0ac9c29-04ba-4737-8af6-8fc91e451e8c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:25:55 compute-0 nova_compute[189387]: 2025-11-26 23:25:55.226 189391 DEBUG oslo_concurrency.lockutils [req-b209e021-6364-45d4-896a-392d07dd0695 req-ce148735-2660-45fa-9859-3978ca443d23 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "f0ac9c29-04ba-4737-8af6-8fc91e451e8c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:25:55 compute-0 nova_compute[189387]: 2025-11-26 23:25:55.226 189391 DEBUG oslo_concurrency.lockutils [req-b209e021-6364-45d4-896a-392d07dd0695 req-ce148735-2660-45fa-9859-3978ca443d23 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "f0ac9c29-04ba-4737-8af6-8fc91e451e8c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:25:55 compute-0 nova_compute[189387]: 2025-11-26 23:25:55.227 189391 DEBUG nova.compute.manager [req-b209e021-6364-45d4-896a-392d07dd0695 req-ce148735-2660-45fa-9859-3978ca443d23 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] No waiting events found dispatching network-vif-plugged-31b6bc9a-cd65-44ef-96ea-c84d392117c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:25:55 compute-0 nova_compute[189387]: 2025-11-26 23:25:55.228 189391 WARNING nova.compute.manager [req-b209e021-6364-45d4-896a-392d07dd0695 req-ce148735-2660-45fa-9859-3978ca443d23 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Received unexpected event network-vif-plugged-31b6bc9a-cd65-44ef-96ea-c84d392117c8 for instance with vm_state active and task_state None.#033[00m
Nov 26 23:25:56 compute-0 nova_compute[189387]: 2025-11-26 23:25:56.020 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:25:56 compute-0 podman[242799]: 2025-11-26 23:25:56.802627614 +0000 UTC m=+0.086413902 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:25:59 compute-0 podman[203621]: time="2025-11-26T23:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:25:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:25:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
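
These two access-log lines are the prometheus-podman-exporter polling the libpod REST API over the unix socket named in its CONTAINER_HOST setting (see the podman_exporter config a few records up). A self-contained stdlib client for the same endpoint, shown purely for illustration:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""

        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    print(conn.getresponse().status)  # 200, matching the access-log lines
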
Nov 26 23:25:59 compute-0 nova_compute[189387]: 2025-11-26 23:25:59.928 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:26:01 compute-0 nova_compute[189387]: 2025-11-26 23:26:01.023 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:26:01 compute-0 openstack_network_exporter[205787]: ERROR   23:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:26:01 compute-0 openstack_network_exporter[205787]: 
Nov 26 23:26:01 compute-0 openstack_network_exporter[205787]: ERROR   23:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:26:01 compute-0 openstack_network_exporter[205787]: ERROR   23:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:26:01 compute-0 openstack_network_exporter[205787]: ERROR   23:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:26:01 compute-0 openstack_network_exporter[205787]: ERROR   23:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:26:01 compute-0 openstack_network_exporter[205787]: 
Nov 26 23:26:01 compute-0 podman[242822]: 2025-11-26 23:26:01.804981411 +0000 UTC m=+0.092948017 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 26 23:26:04 compute-0 nova_compute[189387]: 2025-11-26 23:26:04.930 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:26:06 compute-0 nova_compute[189387]: 2025-11-26 23:26:06.024 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:26:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:26:09.630 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:26:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:26:09.631 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:26:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:26:09.631 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:26:09 compute-0 podman[242841]: 2025-11-26 23:26:09.837902894 +0000 UTC m=+0.129887621 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller)
Nov 26 23:26:09 compute-0 nova_compute[189387]: 2025-11-26 23:26:09.932 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:26:11 compute-0 nova_compute[189387]: 2025-11-26 23:26:11.027 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:26:12 compute-0 podman[242866]: 2025-11-26 23:26:12.845534342 +0000 UTC m=+0.112808056 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 23:26:12 compute-0 podman[242868]: 2025-11-26 23:26:12.853043622 +0000 UTC m=+0.096173942 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 23:26:12 compute-0 podman[242867]: 2025-11-26 23:26:12.872542811 +0000 UTC m=+0.140078851 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 23:26:12 compute-0 podman[242865]: 2025-11-26 23:26:12.878541501 +0000 UTC m=+0.155579455 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, release-0.7.12=, architecture=x86_64, container_name=kepler, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, name=ubi9, release=1214.1726694543)
Nov 26 23:26:12 compute-0 podman[242875]: 2025-11-26 23:26:12.881787997 +0000 UTC m=+0.133506467 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-type=git, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, container_name=openstack_network_exporter, architecture=x86_64, config_id=edpm, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, version=9.6, release=1755695350, maintainer=Red Hat, Inc., name=ubi9-minimal, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
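The five health_status events above are emitted by podman each time a container's healthcheck fires: health_status carries the probe result, health_failing_streak counts consecutive failures, and config_data echoes the container definition (including the healthcheck 'test' command and its 'mount' path) that edpm_ansible deployed. A minimal Python sketch for pulling the simple scalar fields out of one of these lines; the regex approach is an assumption that holds for flat key=value pairs like those shown, not for the nested config_data:

    import re

    EVENT = ("container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 "
             "(image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, "
             "health_status=healthy, health_failing_streak=0)")

    def scalar_fields(line, keys=("name", "health_status", "health_failing_streak")):
        # Each flat key=value pair ends at a comma or the closing parenthesis.
        out = {}
        for key in keys:
            m = re.search(rf"\b{key}=([^,)]+)", line)
            if m:
                out[key] = m.group(1)
        return out

    print(scalar_fields(EVENT))
    # {'name': 'node_exporter', 'health_status': 'healthy', 'health_failing_streak': '0'}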
Nov 26 23:26:14 compute-0 nova_compute[189387]: 2025-11-26 23:26:14.935 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:26:16 compute-0 nova_compute[189387]: 2025-11-26 23:26:16.030 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:26:19 compute-0 nova_compute[189387]: 2025-11-26 23:26:19.938 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:26:21 compute-0 nova_compute[189387]: 2025-11-26 23:26:21.034 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:26:22 compute-0 ovn_controller[97697]: 2025-11-26T23:26:22Z|00049|memory_trim|INFO|Detected inactivity (last active 30014 ms ago): trimming memory
Nov 26 23:26:22 compute-0 podman[242961]: 2025-11-26 23:26:22.815350276 +0000 UTC m=+0.116871083 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 26 23:26:24 compute-0 nova_compute[189387]: 2025-11-26 23:26:24.940 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:26:25 compute-0 nova_compute[189387]: 2025-11-26 23:26:25.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:26:25 compute-0 nova_compute[189387]: 2025-11-26 23:26:25.124 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:26:25 compute-0 nova_compute[189387]: 2025-11-26 23:26:25.955 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-0d344cef-8e34-4a0c-b747-b8f1f12bbe26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:26:25 compute-0 nova_compute[189387]: 2025-11-26 23:26:25.958 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-0d344cef-8e34-4a0c-b747-b8f1f12bbe26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:26:25 compute-0 nova_compute[189387]: 2025-11-26 23:26:25.959 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 23:26:26 compute-0 nova_compute[189387]: 2025-11-26 23:26:26.037 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:26:26 compute-0 ovn_controller[97697]: 2025-11-26T23:26:26Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:22:3f:da 192.168.0.69
Nov 26 23:26:26 compute-0 ovn_controller[97697]: 2025-11-26T23:26:26Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:22:3f:da 192.168.0.69
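The two pinctrl lines show OVN answering DHCP locally on the compute node: ovn-controller itself issues the OFFER and ACK for fa:16:3e:22:3f:da, so no dnsmasq or external DHCP server is involved. Only the server-side halves of the standard DISCOVER/OFFER/REQUEST/ACK exchange get logged here; the client's DISCOVER and REQUEST are implied. A small Python sketch (message names are the standard DHCP ones, nothing read from OVN) that checks an observed sequence for the full DORA order:

    DORA = ["DHCPDISCOVER", "DHCPOFFER", "DHCPREQUEST", "DHCPACK"]

    def has_dora_order(messages):
        # True when DORA appears, in order, as a subsequence of the messages.
        it = iter(messages)
        return all(step in it for step in DORA)

    print(has_dora_order(["DHCPDISCOVER", "DHCPOFFER", "DHCPREQUEST", "DHCPACK"]))  # True
    print(has_dora_order(["DHCPOFFER", "DHCPACK"]))  # False: only the logged server side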
Nov 26 23:26:27 compute-0 podman[242995]: 2025-11-26 23:26:27.809675258 +0000 UTC m=+0.086319397 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:26:28 compute-0 nova_compute[189387]: 2025-11-26 23:26:28.606 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Updating instance_info_cache with network_info: [{"id": "faf484ac-094d-4505-a5ff-b8f5b82ac0cf", "address": "fa:16:3e:22:64:1d", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf484ac-09", "ovs_interfaceid": "faf484ac-094d-4505-a5ff-b8f5b82ac0cf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 23:26:28 compute-0 nova_compute[189387]: 2025-11-26 23:26:28.620 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-0d344cef-8e34-4a0c-b747-b8f1f12bbe26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 23:26:28 compute-0 nova_compute[189387]: 2025-11-26 23:26:28.621 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 26 23:26:28 compute-0 nova_compute[189387]: 2025-11-26 23:26:28.622 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
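The network_info blob logged above is the JSON that _heal_instance_info_cache writes back into the instance's info cache. Its shape (port -> network -> subnets -> ips -> floating_ips) can be walked directly; a sketch over a copy of the structure trimmed to the fields used here, with values taken from the log line:

    import json

    # Trimmed copy of the network_info from the log line above.
    network_info = json.loads("""
    [{"id": "faf484ac-094d-4505-a5ff-b8f5b82ac0cf",
      "address": "fa:16:3e:22:64:1d",
      "network": {"label": "private",
                  "subnets": [{"cidr": "192.168.0.0/24",
                               "ips": [{"address": "192.168.0.173", "type": "fixed",
                                        "floating_ips": [{"address": "192.168.122.185",
                                                          "type": "floating"}]}]}]}}]
    """)

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floating = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["address"], ip["address"], "->", floating)
    # fa:16:3e:22:64:1d 192.168.0.173 -> ['192.168.122.185']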
Nov 26 23:26:29 compute-0 podman[203621]: time="2025-11-26T23:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:26:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:26:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
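The two access-log lines above are the podman system service answering libpod REST calls over its UNIX socket, the same unix:///run/podman/podman.sock that the podman_exporter container mounts; the last=0 query parameter is what triggers the "overwriting limit" notice. A stdlib-only sketch of the same GET; the socket path and API version are taken from the log, the rest is generic http.client plumbing:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client connection that dials a UNIX socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()))  # e.g. 200 29522, as in the access log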
Nov 26 23:26:29 compute-0 nova_compute[189387]: 2025-11-26 23:26:29.944 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:26:30 compute-0 nova_compute[189387]: 2025-11-26 23:26:30.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:26:30 compute-0 nova_compute[189387]: 2025-11-26 23:26:30.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 23:26:31 compute-0 nova_compute[189387]: 2025-11-26 23:26:31.040 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:26:31 compute-0 nova_compute[189387]: 2025-11-26 23:26:31.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:26:31 compute-0 nova_compute[189387]: 2025-11-26 23:26:31.163 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:26:31 compute-0 nova_compute[189387]: 2025-11-26 23:26:31.164 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:26:31 compute-0 nova_compute[189387]: 2025-11-26 23:26:31.165 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:26:31 compute-0 nova_compute[189387]: 2025-11-26 23:26:31.165 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:26:31 compute-0 nova_compute[189387]: 2025-11-26 23:26:31.351 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
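Every qemu-img probe in this audit is wrapped in oslo_concurrency.prlimit, which runs the command under an address-space cap (--as=1073741824, i.e. 1 GiB) and a CPU-time cap (--cpu=30 seconds) so a wedged qemu-img cannot drag the compute service down with it. A sketch of the equivalent limits applied with resource.setrlimit in a subprocess preexec_fn; this approximates what the prlimit helper does rather than reproducing it:

    import resource
    import subprocess

    def rlimits(as_bytes=1073741824, cpu_seconds=30):
        """Mirror `oslo_concurrency.prlimit --as=... --cpu=...` for a child process."""
        def apply():
            resource.setrlimit(resource.RLIMIT_AS, (as_bytes, as_bytes))
            resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        return apply

    result = subprocess.run(
        ["qemu-img", "info",
         "/var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk",
         "--force-share", "--output=json"],
        env={"LC_ALL": "C", "LANG": "C"},  # same locale pinning as the logged command
        preexec_fn=rlimits(), capture_output=True, text=True)
    print(result.returncode)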
Nov 26 23:26:31 compute-0 openstack_network_exporter[205787]: ERROR   23:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:26:31 compute-0 openstack_network_exporter[205787]: ERROR   23:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:26:31 compute-0 openstack_network_exporter[205787]: ERROR   23:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:26:31 compute-0 openstack_network_exporter[205787]: ERROR   23:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:26:31 compute-0 openstack_network_exporter[205787]: ERROR   23:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
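These exporter errors are benign on a compute node: ovn-northd and the OVN databases run only on the control plane, so their control sockets cannot exist here (the missing ovsdb-server socket is likely a container mount-path mismatch), and the dpif-netdev/* commands apply only to a userspace (DPDK) datapath while this host uses the kernel datapath ('datapath_type': 'system' in the port details above). A quick check for which control sockets actually exist; the two directories are the conventional host defaults and may differ from the exporter's configured search paths:

    import glob

    # *.ctl sockets are how ovs-appctl / ovn-appctl reach each daemon.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none")

    # On a compute node one would expect ovs-vswitchd and ovn-controller
    # sockets, and no ovn-northd socket at all.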
Nov 26 23:26:31 compute-0 nova_compute[189387]: 2025-11-26 23:26:31.454 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:26:31 compute-0 nova_compute[189387]: 2025-11-26 23:26:31.456 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:26:31 compute-0 nova_compute[189387]: 2025-11-26 23:26:31.551 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:26:31 compute-0 nova_compute[189387]: 2025-11-26 23:26:31.553 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:26:31 compute-0 nova_compute[189387]: 2025-11-26 23:26:31.617 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:26:31 compute-0 nova_compute[189387]: 2025-11-26 23:26:31.619 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:26:31 compute-0 nova_compute[189387]: 2025-11-26 23:26:31.686 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:26:31 compute-0 nova_compute[189387]: 2025-11-26 23:26:31.694 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:26:31 compute-0 nova_compute[189387]: 2025-11-26 23:26:31.795 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:26:31 compute-0 nova_compute[189387]: 2025-11-26 23:26:31.797 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:26:31 compute-0 nova_compute[189387]: 2025-11-26 23:26:31.861 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:26:31 compute-0 nova_compute[189387]: 2025-11-26 23:26:31.864 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:26:31 compute-0 nova_compute[189387]: 2025-11-26 23:26:31.987 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:26:31 compute-0 nova_compute[189387]: 2025-11-26 23:26:31.988 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:26:32 compute-0 nova_compute[189387]: 2025-11-26 23:26:32.089 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:26:32 compute-0 nova_compute[189387]: 2025-11-26 23:26:32.099 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:26:32 compute-0 nova_compute[189387]: 2025-11-26 23:26:32.186 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:26:32 compute-0 nova_compute[189387]: 2025-11-26 23:26:32.188 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:26:32 compute-0 nova_compute[189387]: 2025-11-26 23:26:32.287 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:26:32 compute-0 nova_compute[189387]: 2025-11-26 23:26:32.290 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:26:32 compute-0 nova_compute[189387]: 2025-11-26 23:26:32.373 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:26:32 compute-0 nova_compute[189387]: 2025-11-26 23:26:32.375 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:26:32 compute-0 nova_compute[189387]: 2025-11-26 23:26:32.436 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:26:32 compute-0 nova_compute[189387]: 2025-11-26 23:26:32.449 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:26:32 compute-0 nova_compute[189387]: 2025-11-26 23:26:32.515 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:26:32 compute-0 nova_compute[189387]: 2025-11-26 23:26:32.517 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:26:32 compute-0 nova_compute[189387]: 2025-11-26 23:26:32.640 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:26:32 compute-0 nova_compute[189387]: 2025-11-26 23:26:32.642 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:26:32 compute-0 nova_compute[189387]: 2025-11-26 23:26:32.762 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:26:32 compute-0 nova_compute[189387]: 2025-11-26 23:26:32.765 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
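Each instance's disk and disk.eph0 files are probed twice in this audit pass (note the repeated Running cmd/CMD pairs per file). The JSON that qemu-img emits carries the size fields the resource tracker needs; a small sketch reading them, using qemu-img's documented JSON keys:

    import json
    import subprocess

    def image_info(path):
        """Return (format, virtual_size_bytes, actual_size_bytes) for a disk file."""
        out = subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            check=True, capture_output=True, text=True).stdout
        info = json.loads(out)
        return info["format"], info["virtual-size"], info.get("actual-size")

    print(image_info("/var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk"))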
Nov 26 23:26:32 compute-0 podman[243063]: 2025-11-26 23:26:32.849202505 +0000 UTC m=+0.123509775 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Nov 26 23:26:32 compute-0 nova_compute[189387]: 2025-11-26 23:26:32.849 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:26:33 compute-0 nova_compute[189387]: 2025-11-26 23:26:33.319 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:26:33 compute-0 nova_compute[189387]: 2025-11-26 23:26:33.320 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4604MB free_disk=72.3167610168457GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
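All eleven PCI devices in the resource view are emulated functions: vendor_id 1af4 is the virtio vendor ID and 8086 is the emulated Intel chipset, and numa_node is null on every device, consistent with a host that exposes no PCI NUMA locality (see the WARNING above). A sketch that summarizes such a device list by vendor; the first entries are copied from the log line, the rest are elided:

    from collections import Counter

    pci_devices = [
        {"address": "0000:00:04.0", "vendor_id": "1af4", "product_id": "1001"},
        {"address": "0000:00:03.0", "vendor_id": "1af4", "product_id": "1000"},
        {"address": "0000:00:00.0", "vendor_id": "8086", "product_id": "1237"},
        {"address": "0000:00:01.3", "vendor_id": "8086", "product_id": "7113"},
        # ...remaining entries elided; see the resource view above.
    ]

    by_vendor = Counter(dev["vendor_id"] for dev in pci_devices)
    print(by_vendor)  # Counter({'1af4': 2, '8086': 2}) for this subset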
Nov 26 23:26:33 compute-0 nova_compute[189387]: 2025-11-26 23:26:33.321 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:26:33 compute-0 nova_compute[189387]: 2025-11-26 23:26:33.321 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:26:33 compute-0 nova_compute[189387]: 2025-11-26 23:26:33.415 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 3214d9e6-3c61-49f0-a353-01201a6aa6db actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:26:33 compute-0 nova_compute[189387]: 2025-11-26 23:26:33.415 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 0d344cef-8e34-4a0c-b747-b8f1f12bbe26 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:26:33 compute-0 nova_compute[189387]: 2025-11-26 23:26:33.416 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:26:33 compute-0 nova_compute[189387]: 2025-11-26 23:26:33.416 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance f0ac9c29-04ba-4737-8af6-8fc91e451e8c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:26:33 compute-0 nova_compute[189387]: 2025-11-26 23:26:33.416 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:26:33 compute-0 nova_compute[189387]: 2025-11-26 23:26:33.416 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 23:26:33 compute-0 nova_compute[189387]: 2025-11-26 23:26:33.505 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:26:33 compute-0 nova_compute[189387]: 2025-11-26 23:26:33.523 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
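The inventory dict in the report line above is what placement uses to compute schedulable capacity, per resource class: capacity = (total - reserved) * allocation_ratio. With the values shown, the node advertises 32 VCPUs (8 physical at a 4.0 overcommit ratio, of which 4 are already allocated per the final resource view), 7168 MB of RAM, and 70.2 GB of disk. The arithmetic, using the logged numbers:

    inventory = {  # copied from the set_inventory_for_provider line above
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0 / MEMORY_MB 7168.0 / DISK_GB 70.2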
Nov 26 23:26:33 compute-0 nova_compute[189387]: 2025-11-26 23:26:33.548 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:26:33 compute-0 nova_compute[189387]: 2025-11-26 23:26:33.549 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.227s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:26:34 compute-0 nova_compute[189387]: 2025-11-26 23:26:34.948 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:26:35 compute-0 nova_compute[189387]: 2025-11-26 23:26:35.545 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:26:35 compute-0 nova_compute[189387]: 2025-11-26 23:26:35.547 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:26:35 compute-0 nova_compute[189387]: 2025-11-26 23:26:35.548 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:26:35 compute-0 nova_compute[189387]: 2025-11-26 23:26:35.548 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:26:36 compute-0 nova_compute[189387]: 2025-11-26 23:26:36.045 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:26:36 compute-0 nova_compute[189387]: 2025-11-26 23:26:36.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.843 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them, so the polling cycle can be expected to take longer than intended. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.844 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
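The two manager lines say it plainly: with more pollsters than worker threads ([1] thread here), the pollsters run strictly one after another, so a single slow pollster stretches the whole polling interval. A toy demonstration of that queueing effect with concurrent.futures, the executor class named in the registration lines below; the 0.1 s sleep is a stand-in for one pollster's collection time:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def pollster(name):
        time.sleep(0.1)  # stand-in for one pollster's collection cycle
        return name

    names = [f"pollster-{i}" for i in range(23)]  # roughly the number registered below

    with ThreadPoolExecutor(max_workers=1) as pool:  # matches "[1] threads"
        start = time.monotonic()
        list(pool.map(pollster, names))
        print(f"{time.monotonic() - start:.1f}s")  # ~2.3s: fully serialized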
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.845 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.846 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
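The nineteen registration lines above show the polling manager pairing each stevedore extension with one shared concurrent.futures ThreadPoolExecutor (the same 0x7f7ce8d5ff50 address every time) and empty per-cycle caches. A minimal sketch of that registration pattern, using a hypothetical PollsterRegistry class rather than ceilometer's actual manager:

    from concurrent.futures import ThreadPoolExecutor

    class PollsterRegistry:
        # Hypothetical stand-in for the registration seen in the log.
        def __init__(self, max_workers=4):
            # one executor is shared by every pollster, matching the single
            # ThreadPoolExecutor object repeated in the lines above
            self.executor = ThreadPoolExecutor(max_workers=max_workers)
            self.registrations = []

        def register(self, extension, source):
            # cache, pollster history and discovery cache all start as {},
            # exactly as logged for each registration
            self.registrations.append({
                'extension': extension,
                'source': source,
                'cache': {},
                'history': {},
                'discovery_cache': {},
            })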
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:36.856 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd', 'name': 'vn-fhdmirp-runjo4u2h7na-he3onrrerp7p-vnf-pxixoz6blnnj', 'flavor': {'id': 'abcd883d-a9af-4dee-93ae-b5623bc853b6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '422f324f-e13a-4c74-ba29-023e791ed636'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd2e793599b6418881c391df7f71e0c6', 'user_id': '6ad061874c77438db2e6d8efb2b1400b', 'hostId': '78fe62e880b703c207d346101c9f9f1436f7f233cb48d27a5485236f', 'status': 'active', 'metadata': {'metering.server_group': '6ec897c5-079b-468e-ab49-e7a7350f9bc9'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:37.153 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '0d344cef-8e34-4a0c-b747-b8f1f12bbe26', 'name': 'vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp', 'flavor': {'id': 'abcd883d-a9af-4dee-93ae-b5623bc853b6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '422f324f-e13a-4c74-ba29-023e791ed636'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd2e793599b6418881c391df7f71e0c6', 'user_id': '6ad061874c77438db2e6d8efb2b1400b', 'hostId': '78fe62e880b703c207d346101c9f9f1436f7f233cb48d27a5485236f', 'status': 'active', 'metadata': {'metering.server_group': '6ec897c5-079b-468e-ab49-e7a7350f9bc9'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:37.157 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f0ac9c29-04ba-4737-8af6-8fc91e451e8c from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 26 23:26:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:37.158 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f0ac9c29-04ba-4737-8af6-8fc91e451e8c -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}caea05af4ff3bb71dca694a18a22cbf449a7452987534b1df6f159c64c91df36" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:38.999 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Wed, 26 Nov 2025 23:26:37 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-862bd5f6-3e8b-4b6a-a6ce-c61b305bf4f5 x-openstack-request-id: req-862bd5f6-3e8b-4b6a-a6ce-c61b305bf4f5 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.000 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "f0ac9c29-04ba-4737-8af6-8fc91e451e8c", "name": "vn-fhdmirp-gcwraztym6um-bi3jxhg2edck-vnf-4tssxs7u7dl3", "status": "ACTIVE", "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "user_id": "6ad061874c77438db2e6d8efb2b1400b", "metadata": {"metering.server_group": "6ec897c5-079b-468e-ab49-e7a7350f9bc9"}, "hostId": "78fe62e880b703c207d346101c9f9f1436f7f233cb48d27a5485236f", "image": {"id": "422f324f-e13a-4c74-ba29-023e791ed636", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/422f324f-e13a-4c74-ba29-023e791ed636"}]}, "flavor": {"id": "abcd883d-a9af-4dee-93ae-b5623bc853b6", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/abcd883d-a9af-4dee-93ae-b5623bc853b6"}]}, "created": "2025-11-26T23:25:42Z", "updated": "2025-11-26T23:25:53Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.69", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:22:3f:da"}, {"version": 4, "addr": "192.168.122.192", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:22:3f:da"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/f0ac9c29-04ba-4737-8af6-8fc91e451e8c"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/f0ac9c29-04ba-4737-8af6-8fc91e451e8c"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-26T23:25:53.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.000 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f0ac9c29-04ba-4737-8af6-8fc91e451e8c used request id req-862bd5f6-3e8b-4b6a-a6ce-c61b305bf4f5 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.002 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f0ac9c29-04ba-4737-8af6-8fc91e451e8c', 'name': 'vn-fhdmirp-gcwraztym6um-bi3jxhg2edck-vnf-4tssxs7u7dl3', 'flavor': {'id': 'abcd883d-a9af-4dee-93ae-b5623bc853b6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '422f324f-e13a-4c74-ba29-023e791ed636'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd2e793599b6418881c391df7f71e0c6', 'user_id': '6ad061874c77438db2e6d8efb2b1400b', 'hostId': '78fe62e880b703c207d346101c9f9f1436f7f233cb48d27a5485236f', 'status': 'active', 'metadata': {'metering.server_group': '6ec897c5-079b-468e-ab49-e7a7350f9bc9'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.008 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3214d9e6-3c61-49f0-a353-01201a6aa6db', 'name': 'test_0', 'flavor': {'id': 'abcd883d-a9af-4dee-93ae-b5623bc853b6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '422f324f-e13a-4c74-ba29-023e791ed636'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd2e793599b6418881c391df7f71e0c6', 'user_id': '6ad061874c77438db2e6d8efb2b1400b', 'hostId': '78fe62e880b703c207d346101c9f9f1436f7f233cb48d27a5485236f', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
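When an instance is missing from the local metadata cache (f0ac9c29 above), the agent falls back to the Nova API: the REQ/RESP pair shows a plain GET on /v2.1/servers/{id} with a hashed token and the microversion pinned to 2.1. A hedged reproduction of that call with requests, taking the endpoint and headers straight from the REQ line; TOKEN is a placeholder, not a real credential:

    import requests

    NOVA = "https://nova-internal.openstack.svc:8774"
    server_id = "f0ac9c29-04ba-4737-8af6-8fc91e451e8c"

    resp = requests.get(
        f"{NOVA}/v2.1/servers/{server_id}",
        headers={
            "Accept": "application/json",
            "User-Agent": "python-novaclient",
            "X-Auth-Token": "TOKEN",                 # placeholder credential
            "X-OpenStack-Nova-API-Version": "2.1",   # microversion from the log
        },
    )
    server = resp.json()["server"]                   # shape matches RESP BODY
    print(server["OS-EXT-SRV-ATTR:instance_name"])   # instance-00000004

The discovery code then flattens that body into the instance-data dict logged at discovery.py:315.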
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.009 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.009 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.009 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.009 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.010 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T23:26:39.009654) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.012 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
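Each polling cycle starts with the same coordination check: a pollster partitions its resources across agents only when its source names a coordination group, and with no group the hashring is None and this agent polls everything itself. A sketch of that decision under an assumed hashring interface (belongs_to is hypothetical, not a tooz or ceilometer API):

    def resources_for_agent(resources, hashring, agent_id):
        # hashring is None when the source requires no coordination,
        # the "[None]" case logged above
        if hashring is None:
            return resources
        return [r for r in resources if hashring.belongs_to(r, agent_id)]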
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.012 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.012 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.013 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.013 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.013 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.014 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T23:26:39.013451) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.020 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.027 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.034 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for f0ac9c29-04ba-4737-8af6-8fc91e451e8c / tap31b6bc9a-cd inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.035 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.044 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.045 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
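The inspector line for tap31b6bc9a-cd shows how delta meters are seeded: cumulative vNIC counters are cached per instance and interface, and a first reading has no predecessor, so no delta can be produced yet. A minimal sketch of that bookkeeping, not ceilometer's actual cache:

    _prev = {}  # (instance_id, iface) -> last cumulative reading

    def delta(instance_id, iface, cumulative):
        key = (instance_id, iface)
        if key not in _prev:
            _prev[key] = cumulative   # seed only: "No delta meter predecessor"
            return None
        d, _prev[key] = cumulative - _prev[key], cumulative
        return d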
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.046 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.047 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.047 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.048 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.049 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.049 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T23:26:39.049303) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.051 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.052 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.052 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.052 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.053 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.053 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T23:26:39.052838) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.054 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.054 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.055 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.055 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.056 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.056 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.056 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.057 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.058 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T23:26:39.057355) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.057 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.099 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/cpu volume: 34220000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.142 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/cpu volume: 276930000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.187 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/cpu volume: 32900000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.237 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/cpu volume: 39370000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.238 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
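The cpu samples are cumulative guest CPU time in nanoseconds, so 34220000000 for 2a76fe3c is about 34.22 s since boot. Utilization is derived downstream by differencing two successive samples; a worked example with a hypothetical follow-up reading:

    ns_prev, ns_now = 34_220_000_000, 34_820_000_000   # second value assumed
    interval_s, vcpus = 60, 1                          # m1.small has 1 vCPU
    cpu_util_pct = (ns_now - ns_prev) / 1e9 / (interval_s * vcpus) * 100
    print(f"{cpu_util_pct:.1f}%")                      # 1.0%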
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.238 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.238 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.238 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.239 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.239 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.239 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.239 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.239 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.240 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.240 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.241 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.241 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.241 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.241 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.242 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.242 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T23:26:39.239150) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.242 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T23:26:39.242029) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.285 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.286 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.287 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.327 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.328 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.329 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.371 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.372 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.372 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.410 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.410 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.411 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.412 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
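Each instance reports three block devices: the two 1073741824-byte entries match the flavor's 1 GiB root and 1 GiB ephemeral disks, and the small third device is consistent with the config drive ("config_drive": "True" in the Nova response above). A quick conversion check:

    GiB = 1024 ** 3
    devices = [1073741824, 1073741824, 583680]   # values from the log
    print([round(v / GiB, 6) for v in devices])  # [1.0, 1.0, 0.000544]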
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.412 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.413 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.413 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.413 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.413 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.413 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.outgoing.bytes volume: 2328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.414 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T23:26:39.413625) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.414 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.outgoing.bytes volume: 7172 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.415 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.outgoing.bytes volume: 1821 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.415 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.bytes volume: 2314 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.416 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.416 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.416 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.416 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.417 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.417 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.417 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.outgoing.bytes.delta volume: 422 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.417 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T23:26:39.417317) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.418 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.outgoing.bytes.delta volume: 2478 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.418 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.419 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.419 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.420 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.420 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.420 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.420 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.420 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.421 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/memory.usage volume: 49.10546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.421 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T23:26:39.420904) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.421 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/memory.usage volume: 48.93359375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.422 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/memory.usage volume: 49.69921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.422 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.423 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
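memory.usage is reported in MB, so 49.1 against the flavor's ram=512 puts each of these guests near 9.6% of its allocation:

    used_mb, flavor_mb = 49.10546875, 512        # values from the log
    print(f"{used_mb / flavor_mb:.1%}")          # 9.6%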
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.423 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.423 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.424 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.424 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.424 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.424 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.incoming.bytes volume: 1612 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.425 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.425 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.426 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.bytes volume: 2178 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.426 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.427 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T23:26:39.424409) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.427 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.427 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.427 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.427 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.428 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.428 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-26T23:26:39.428150) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.428 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.428 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-fhdmirp-gcwraztym6um-bi3jxhg2edck-vnf-4tssxs7u7dl3>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-fhdmirp-gcwraztym6um-bi3jxhg2edck-vnf-4tssxs7u7dl3>]
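The ERROR above is the permanent-failure path: LibvirtInspector never supplies precomputed per-interface rates, so the pollster raises PollsterPermanentError and the manager stops polling that resource on this source instead of retrying every cycle. A sketch of that exclusion pattern, with a stand-in exception class (the real one lives in ceilometer.polling.plugin_base):

    class PollsterPermanentError(Exception):
        def __init__(self, fail_res_list):
            self.fail_res_list = fail_res_list

    blacklist = set()   # (pollster, resource id) pairs excluded for good

    def poll(pollster, resources, get_samples):
        todo = [r for r in resources if (pollster, r) not in blacklist]
        try:
            return list(get_samples(todo))
        except PollsterPermanentError as e:
            for r in e.fail_res_list:
                blacklist.add((pollster, r))   # "Prevent ... anymore!"
            return []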
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.429 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.429 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.429 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.429 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.430 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.430 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.430 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.outgoing.packets volume: 60 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.431 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.outgoing.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.431 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T23:26:39.430012) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.432 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.432 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.432 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.433 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.433 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.433 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.433 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.434 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T23:26:39.433777) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.541 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.542 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.543 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.643 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.644 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.644 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.767 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.767 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.767 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.865 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.866 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.866 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.867 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
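Each of the four instances emits three disk.device.read.bytes samples, one per attached block device. When sifting a capture like this one, the uuid/meter/volume triples can be pulled straight out of the _stats_to_sample lines; a small sketch, with the field layout taken from the DEBUG lines above:

    import re

    # Matches "... [-] <uuid>/<meter> volume: <n> _stats_to_sample ..."
    SAMPLE_RE = re.compile(
        r'(?P<uuid>[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12})'
        r'/(?P<meter>[\w.]+) volume: (?P<volume>\d+)'
    )

    def iter_samples(lines):
        for line in lines:
            m = SAMPLE_RE.search(line)
            if m:
                yield m['uuid'], m['meter'], int(m['volume'])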
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.867 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.867 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.867 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.867 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.868 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.868 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T23:26:39.867952) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.869 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.869 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.869 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.870 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.870 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.870 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.871 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.871 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.871 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.871 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.871 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.871 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.872 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.872 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.873 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.873 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.873 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.873 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.873 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.873 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.873 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.latency volume: 833217718 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.874 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T23:26:39.871396) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.874 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T23:26:39.873685) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.874 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.latency volume: 118947761 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.875 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.latency volume: 102487832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.875 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.latency volume: 933784002 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.875 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.latency volume: 144704360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.875 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.latency volume: 114761007 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.876 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.latency volume: 1305394210 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.876 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.latency volume: 123508779 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.876 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.latency volume: 100732301 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.876 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.latency volume: 766490036 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.877 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.latency volume: 135917507 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.877 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.latency volume: 99383059 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.878 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
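disk.device.read.latency is a cumulative counter (total time spent servicing reads since the guest started, in nanoseconds under the usual libvirt block-stats semantics, an assumption here rather than something this log states), so a per-interval figure has to be derived from consecutive polls:

    # Per-interval read time from two cumulative readings; a counter
    # reset (e.g. guest reboot) shows up as a negative delta.
    def read_time_delta_ns(prev_ns, curr_ns):
        delta = curr_ns - prev_ns
        return delta if delta >= 0 else None

    # Hypothetical follow-up reading for 2a76fe3c-...'s first device:
    # read_time_delta_ns(833217718, 845000000) -> 11782282 ns of read time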
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.878 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.878 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.878 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.878 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.878 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.878 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.879 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.879 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.879 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.880 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.880 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.880 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.880 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.881 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.881 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.881 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.882 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.882 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.883 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.883 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.883 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.883 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.883 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.883 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.883 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.884 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.884 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.884 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.885 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.885 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.885 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.885 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.886 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T23:26:39.878664) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.886 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T23:26:39.883580) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.887 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.887 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.887 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.888 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
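Per the discovery lines above, PerDevicePhysicalPollster feeds disk.device.usage and PerDeviceAllocationPollster feeds disk.device.allocation; both correspond to the per-device block-info figures libvirt reports. A sketch of reading the same numbers directly with libvirt-python (the connection URI and device name are placeholders; the UUID is the first instance above):

    import libvirt  # libvirt-python

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByUUIDString('2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd')
    # blockInfo() returns (capacity, allocation, physical) in bytes.
    capacity, allocation, physical = dom.blockInfo('vda')
    print(f'vda capacity={capacity} allocation={allocation} physical={physical}')
    conn.close()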
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.888 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.888 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.888 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.888 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.888 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.888 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.889 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.889 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.889 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.889 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.890 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.890 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.890 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.891 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.891 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.891 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.891 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.892 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.892 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.893 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T23:26:39.888720) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.893 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.893 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.893 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.893 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.893 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.894 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.894 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.894 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.895 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.895 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T23:26:39.893612) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.896 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.896 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.896 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.897 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.897 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.897 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.898 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.898 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.899 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.900 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.900 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.900 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.900 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.900 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.901 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.902 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.902 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.903 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
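All four instances report power.state volume 1, i.e. running; the value tracks libvirt's virDomainState enum (assuming the meter passes the state number through unchanged):

    # Standard libvirt virDomainState numbering.
    POWER_STATE = {
        0: 'nostate', 1: 'running', 2: 'blocked', 3: 'paused',
        4: 'shutdown', 5: 'shutoff', 6: 'crashed', 7: 'pmsuspended',
    }
    assert POWER_STATE[1] == 'running'  # the value sampled above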
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.904 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T23:26:39.900613) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
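The heartbeat above is stamped 23:26:39.904 by thread 12 while the discovery line just below is stamped 23:26:39.903 by thread 14: the status thread and the sampling thread write to the journal independently, so line order and event order can diverge slightly. Sorting by the embedded oslo.log timestamp restores event order; a sketch that assumes exactly the line layout seen in this capture:

    from datetime import datetime

    FMT = '%Y-%m-%d %H:%M:%S.%f'

    def by_event_time(lines):
        def key(line):
            # After "]: " comes "2025-11-26 23:26:39.904 <thread> ...".
            rest = line.split(']: ', 1)[1]
            date_part, time_part, _ = rest.split(' ', 2)
            return datetime.strptime(f'{date_part} {time_part}', FMT)
        return sorted(lines, key=key)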
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.903 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.904 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.904 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.904 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.904 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.904 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.latency volume: 2706733169 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.905 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.latency volume: 13192002 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.905 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.905 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.latency volume: 2747561632 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.906 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.latency volume: 15877212 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.906 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T23:26:39.904740) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.906 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.907 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.latency volume: 2810599848 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.907 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.latency volume: 12954358 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.907 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.907 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.latency volume: 2067067389 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.908 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.latency volume: 14796330 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.908 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.909 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.909 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.909 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.909 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.909 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.909 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.910 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T23:26:39.909721) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.910 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.incoming.bytes.delta volume: 126 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.910 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.incoming.bytes.delta volume: 3431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.910 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.911 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.911 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.911 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.911 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.912 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.912 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.912 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.912 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.912 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.912 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.913 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.requests volume: 244 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.913 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.913 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.914 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.requests volume: 219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.914 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.914 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.915 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.915 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.915 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.916 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
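The three "volume:" lines logged per instance above are cumulative write-request counters, one per attached block device, which the compute pollster reads through libvirt. A minimal sketch of that read path, assuming the python3-libvirt bindings and hypothetical device names (vda/vdb/vdc); the instance UUID is taken from the log:

    import libvirt  # python3-libvirt bindings assumed installed

    conn = libvirt.openReadOnly('qemu:///system')
    # instance UUID copied from the log lines above
    dom = conn.lookupByUUIDString('2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd')
    for dev in ('vda', 'vdb', 'vdc'):  # hypothetical device names
        # blockStats returns (rd_req, rd_bytes, wr_req, wr_bytes, errs)
        rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
        print(f'{dev}: disk.device.write.requests = {wr_req}')
    conn.close()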
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.916 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T23:26:39.912231) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.916 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.917 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.917 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.917 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.917 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.917 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.917 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-fhdmirp-gcwraztym6um-bi3jxhg2edck-vnf-4tssxs7u7dl3>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-fhdmirp-gcwraztym6um-bi3jxhg2edck-vnf-4tssxs7u7dl3>]
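The ERROR above is ceilometer's blacklisting mechanism rather than a crash: the libvirt inspector has no rate data for this meter, so the pollster raises PollsterPermanentError and the manager stops polling those resources for that source. A minimal sketch of the pattern, assuming the plugin_base module named in the logged exception; the class is hypothetical and abbreviated:

    from ceilometer.polling import plugin_base

    class IncomingBytesRateSketch(plugin_base.PollsterBase):  # hypothetical
        @property
        def default_discovery(self):
            return 'local_instances'  # discovery method named in the log

        def get_samples(self, manager, cache, resources):
            # No data can ever be produced for these resources: raising
            # PollsterPermanentError makes the manager emit the
            # "Prevent pollster ... anymore!" ERROR above and stop retrying.
            raise plugin_base.PollsterPermanentError(resources)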
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.918 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.919 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-26T23:26:39.917347) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.918 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.919 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.919 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.919 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.919 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.919 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.919 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.920 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.920 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.920 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.920 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.920 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.920 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.920 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.921 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.921 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.921 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.921 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.921 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.921 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.922 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.922 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.922 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.922 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:26:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:26:39.922 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:26:39 compute-0 nova_compute[189387]: 2025-11-26 23:26:39.949 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:26:40 compute-0 podman[243089]: 2025-11-26 23:26:40.871391668 +0000 UTC m=+0.166461693 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 23:26:41 compute-0 nova_compute[189387]: 2025-11-26 23:26:41.049 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:26:43 compute-0 podman[243127]: 2025-11-26 23:26:43.812749477 +0000 UTC m=+0.082969820 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:26:43 compute-0 podman[243131]: 2025-11-26 23:26:43.841452174 +0000 UTC m=+0.099081409 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, release=1755695350, vcs-type=git, vendor=Red Hat, Inc., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64)
Nov 26 23:26:43 compute-0 podman[243116]: 2025-11-26 23:26:43.857411199 +0000 UTC m=+0.141394650 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 23:26:43 compute-0 podman[243122]: 2025-11-26 23:26:43.861527306 +0000 UTC m=+0.128806642 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Nov 26 23:26:43 compute-0 podman[243115]: 2025-11-26 23:26:43.862301087 +0000 UTC m=+0.155087177 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, vcs-type=git, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.openshift.tags=base rhel9, config_id=edpm, name=ubi9, release=1214.1726694543, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
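Each health_status line above is podman executing the healthcheck configured for the container (the 'test' entry in config_data) and recording the result. The same check can be driven by hand; a sketch using the ovn_controller name from the log, assuming podman 4.x's Docker-compatible inspect fields:

    import subprocess

    # Run the configured check ('/openstack/healthcheck'); exit status 0
    # corresponds to the health_status=healthy entries above.
    subprocess.run(['podman', 'healthcheck', 'run', 'ovn_controller'], check=True)

    # Read back the recorded status and failing streak.
    out = subprocess.run(
        ['podman', 'inspect', '--format',
         '{{.State.Health.Status}} {{.State.Health.FailingStreak}}',
         'ovn_controller'],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())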
Nov 26 23:26:44 compute-0 nova_compute[189387]: 2025-11-26 23:26:44.955 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:26:46 compute-0 nova_compute[189387]: 2025-11-26 23:26:46.053 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:26:49 compute-0 nova_compute[189387]: 2025-11-26 23:26:49.958 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:26:51 compute-0 nova_compute[189387]: 2025-11-26 23:26:51.055 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:26:53 compute-0 podman[243214]: 2025-11-26 23:26:53.785263117 +0000 UTC m=+0.082787056 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 26 23:26:54 compute-0 nova_compute[189387]: 2025-11-26 23:26:54.963 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:26:56 compute-0 nova_compute[189387]: 2025-11-26 23:26:56.058 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:26:58 compute-0 podman[243234]: 2025-11-26 23:26:58.778581594 +0000 UTC m=+0.075914726 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 23:26:59 compute-0 podman[203621]: time="2025-11-26T23:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:26:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:26:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4792 "" "Go-http-client/1.1"
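podman[203621] here is the podman API service answering libpod REST calls over its unix socket; the caller is prometheus-podman-exporter, whose config_data elsewhere in this log sets CONTAINER_HOST=unix:///run/podman/podman.sock. A minimal sketch of the same container listing over that socket, with the socket path and API version taken from the log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Just enough HTTP-over-unix-socket for the libpod API."""
        def __init__(self, sock_path):
            super().__init__('localhost')
            self._sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._sock_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print(len(containers), 'containers')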
Nov 26 23:26:59 compute-0 nova_compute[189387]: 2025-11-26 23:26:59.969 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:27:01 compute-0 nova_compute[189387]: 2025-11-26 23:27:01.061 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:27:01 compute-0 openstack_network_exporter[205787]: ERROR   23:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:27:01 compute-0 openstack_network_exporter[205787]: ERROR   23:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:27:01 compute-0 openstack_network_exporter[205787]: ERROR   23:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:27:01 compute-0 openstack_network_exporter[205787]: ERROR   23:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:27:01 compute-0 openstack_network_exporter[205787]: ERROR   23:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
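These exporter errors are expected on a compute node: appctl-style calls locate a daemon through its <name>.<pid>.ctl control socket, ovn-northd only runs on the control plane, and the dpif-netdev calls only apply to a userspace (DPDK) datapath, which this kernel-datapath host does not have. A quick check for the sockets, with directories assumed from the usual /run layout:

    import glob

    # appctl finds a daemon via its <name>.<pid>.ctl file; an empty match
    # explains the "no control socket files found" errors above.
    for pattern in ('/var/run/ovn/ovn-northd.*.ctl',
                    '/var/run/openvswitch/ovsdb-server.*.ctl'):
        matches = glob.glob(pattern)
        print(pattern, '->', matches or 'no control socket here')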
Nov 26 23:27:03 compute-0 podman[243257]: 2025-11-26 23:27:03.859461416 +0000 UTC m=+0.146508684 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 26 23:27:04 compute-0 nova_compute[189387]: 2025-11-26 23:27:04.973 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:27:06 compute-0 nova_compute[189387]: 2025-11-26 23:27:06.066 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:27:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:27:09.632 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:27:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:27:09.633 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:27:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:27:09.633 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:27:09 compute-0 nova_compute[189387]: 2025-11-26 23:27:09.975 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:27:11 compute-0 nova_compute[189387]: 2025-11-26 23:27:11.069 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
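The _check_child_processes trio above is oslo.concurrency's named-lock pattern: one DEBUG line each for the acquire attempt, the acquisition with wait time, and the release with hold time. In code this is usually spelled with the lockutils.synchronized decorator; a minimal sketch with the lock name from the log and a hypothetical body:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        # body is hypothetical; the decorator itself emits the
        # acquire/held/release DEBUG lines seen above
        pass

    _check_child_processes()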
Nov 26 23:27:11 compute-0 podman[243276]: 2025-11-26 23:27:11.88199147 +0000 UTC m=+0.161463183 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Nov 26 23:27:14 compute-0 podman[243303]: 2025-11-26 23:27:14.805068064 +0000 UTC m=+0.101228085 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:27:14 compute-0 podman[243304]: 2025-11-26 23:27:14.820943247 +0000 UTC m=+0.098912395 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 23:27:14 compute-0 podman[243302]: 2025-11-26 23:27:14.823405171 +0000 UTC m=+0.111555254 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vcs-type=git, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, release=1214.1726694543, io.buildah.version=1.29.0, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 26 23:27:14 compute-0 podman[243306]: 2025-11-26 23:27:14.823821832 +0000 UTC m=+0.089376057 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vendor=Red Hat, Inc.)
Nov 26 23:27:14 compute-0 podman[243305]: 2025-11-26 23:27:14.8252751 +0000 UTC m=+0.113727711 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm)
Nov 26 23:27:14 compute-0 nova_compute[189387]: 2025-11-26 23:27:14.977 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:27:16 compute-0 nova_compute[189387]: 2025-11-26 23:27:16.072 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:27:19 compute-0 nova_compute[189387]: 2025-11-26 23:27:19.980 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:27:21 compute-0 nova_compute[189387]: 2025-11-26 23:27:21.074 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:27:24 compute-0 podman[243396]: 2025-11-26 23:27:24.86568409 +0000 UTC m=+0.148781364 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 26 23:27:24 compute-0 nova_compute[189387]: 2025-11-26 23:27:24.984 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:27:25 compute-0 nova_compute[189387]: 2025-11-26 23:27:25.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:27:25 compute-0 nova_compute[189387]: 2025-11-26 23:27:25.126 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:27:26 compute-0 nova_compute[189387]: 2025-11-26 23:27:26.011 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:27:26 compute-0 nova_compute[189387]: 2025-11-26 23:27:26.012 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:27:26 compute-0 nova_compute[189387]: 2025-11-26 23:27:26.013 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 23:27:26 compute-0 nova_compute[189387]: 2025-11-26 23:27:26.077 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:27:28 compute-0 nova_compute[189387]: 2025-11-26 23:27:28.783 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Updating instance_info_cache with network_info: [{"id": "c5ede21d-87b7-4215-9363-b5863725bc1e", "address": "fa:16:3e:d8:b5:86", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5ede21d-87", "ovs_interfaceid": "c5ede21d-87b7-4215-9363-b5863725bc1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 23:27:28 compute-0 nova_compute[189387]: 2025-11-26 23:27:28.809 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 23:27:28 compute-0 nova_compute[189387]: 2025-11-26 23:27:28.809 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
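The network_info blob cached above is plain JSON: one entry per VIF, fixed IPs nested under network.subnets[].ips[], and floating IPs attached to each fixed IP. An abridged parse keeping only those fields, with the values copied from the log:

    # abridged copy of the network_info structure logged above
    network_info = [{
        "id": "c5ede21d-87b7-4215-9363-b5863725bc1e",
        "network": {"subnets": [{"ips": [{
            "address": "192.168.0.214",
            "floating_ips": [{"address": "192.168.122.208"}],
        }]}]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", floats)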
Nov 26 23:27:29 compute-0 nova_compute[189387]: 2025-11-26 23:27:29.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:27:29 compute-0 podman[203621]: time="2025-11-26T23:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:27:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:27:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
Nov 26 23:27:29 compute-0 podman[243414]: 2025-11-26 23:27:29.868975343 +0000 UTC m=+0.145574769 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 23:27:29 compute-0 nova_compute[189387]: 2025-11-26 23:27:29.985 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:27:30 compute-0 nova_compute[189387]: 2025-11-26 23:27:30.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:27:30 compute-0 nova_compute[189387]: 2025-11-26 23:27:30.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
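
_reclaim_queued_deletes exits immediately here because reclaim_instance_interval is at its default of 0, so soft-deleted instances are never reclaimed by this host. The guard is a plain early return on the config value; a sketch of the pattern with a hypothetical CONF object:

    def _reclaim_queued_deletes(CONF, context):
        interval = CONF.reclaim_instance_interval
        if interval <= 0:
            # Produces the "CONF.reclaim_instance_interval <= 0, skipping..." line above.
            return
        # Only now would SOFT_DELETED instances older than `interval` seconds be purged.
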
Nov 26 23:27:31 compute-0 nova_compute[189387]: 2025-11-26 23:27:31.080 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:27:31 compute-0 nova_compute[189387]: 2025-11-26 23:27:31.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:27:31 compute-0 nova_compute[189387]: 2025-11-26 23:27:31.153 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:27:31 compute-0 nova_compute[189387]: 2025-11-26 23:27:31.154 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:27:31 compute-0 nova_compute[189387]: 2025-11-26 23:27:31.155 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
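
The acquire/acquired/released triplets come from oslo.concurrency internal locks: resource-tracker methods run under the named "compute_resources" lock, and the waited/held timings in the log are emitted by the lock wrapper itself. A minimal standalone sketch using the same library (the function body is hypothetical):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # Runs with the "compute_resources" lock held; entering and leaving
        # produces the acquired/released DEBUG lines seen above.
        pass
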
Nov 26 23:27:31 compute-0 nova_compute[189387]: 2025-11-26 23:27:31.156 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:27:31 compute-0 nova_compute[189387]: 2025-11-26 23:27:31.273 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:27:31 compute-0 nova_compute[189387]: 2025-11-26 23:27:31.368 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:27:31 compute-0 nova_compute[189387]: 2025-11-26 23:27:31.369 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
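
Each disk probe is qemu-img info executed under oslo_concurrency.prlimit, which caps the child at 1 GiB of address space (--as=1073741824) and 30 s of CPU (--cpu=30) before handing off to the real command, so a hung or runaway qemu-img cannot stall the audit. A rough stdlib-only equivalent (a sketch, not nova's actual helper):

    import json
    import os
    import resource
    import subprocess

    def qemu_img_info(path):
        def limits():
            # Same caps the prlimit wrapper applies in the log lines above.
            resource.setrlimit(resource.RLIMIT_AS, (1 << 30, 1 << 30))
            resource.setrlimit(resource.RLIMIT_CPU, (30, 30))
        out = subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            env={**os.environ, "LC_ALL": "C", "LANG": "C"},
            preexec_fn=limits, capture_output=True, check=True,
        )
        return json.loads(out.stdout)
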
Nov 26 23:27:31 compute-0 openstack_network_exporter[205787]: ERROR   23:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:27:31 compute-0 openstack_network_exporter[205787]: ERROR   23:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:27:31 compute-0 openstack_network_exporter[205787]: ERROR   23:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:27:31 compute-0 openstack_network_exporter[205787]: ERROR   23:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:27:31 compute-0 openstack_network_exporter[205787]: ERROR   23:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
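
These exporter errors are expected on a compute node: ovn-northd and a standalone ovsdb-server only run on the control plane, and with the kernel datapath ("datapath_type": "system" in the port details above) the dpif-netdev PMD commands have no userspace datapath to query. One way to confirm which control sockets actually exist on this host (the glob patterns are typical locations and an assumption, not the exporter's own lookup logic):

    import glob

    # Control sockets OVS/OVN daemons create while running (paths assumed).
    for pattern in ("/var/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none")
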
Nov 26 23:27:31 compute-0 nova_compute[189387]: 2025-11-26 23:27:31.458 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:27:31 compute-0 nova_compute[189387]: 2025-11-26 23:27:31.459 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:27:31 compute-0 nova_compute[189387]: 2025-11-26 23:27:31.518 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:27:31 compute-0 nova_compute[189387]: 2025-11-26 23:27:31.520 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:27:31 compute-0 nova_compute[189387]: 2025-11-26 23:27:31.614 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:27:31 compute-0 nova_compute[189387]: 2025-11-26 23:27:31.628 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:27:31 compute-0 nova_compute[189387]: 2025-11-26 23:27:31.733 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:27:31 compute-0 nova_compute[189387]: 2025-11-26 23:27:31.734 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:27:31 compute-0 nova_compute[189387]: 2025-11-26 23:27:31.794 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:27:31 compute-0 nova_compute[189387]: 2025-11-26 23:27:31.795 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:27:31 compute-0 nova_compute[189387]: 2025-11-26 23:27:31.911 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:27:31 compute-0 nova_compute[189387]: 2025-11-26 23:27:31.912 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:27:32 compute-0 nova_compute[189387]: 2025-11-26 23:27:32.000 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:27:32 compute-0 nova_compute[189387]: 2025-11-26 23:27:32.007 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:27:32 compute-0 nova_compute[189387]: 2025-11-26 23:27:32.101 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:27:32 compute-0 nova_compute[189387]: 2025-11-26 23:27:32.102 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:27:32 compute-0 nova_compute[189387]: 2025-11-26 23:27:32.201 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:27:32 compute-0 nova_compute[189387]: 2025-11-26 23:27:32.203 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:27:32 compute-0 nova_compute[189387]: 2025-11-26 23:27:32.270 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:27:32 compute-0 nova_compute[189387]: 2025-11-26 23:27:32.272 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:27:32 compute-0 nova_compute[189387]: 2025-11-26 23:27:32.335 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:27:32 compute-0 nova_compute[189387]: 2025-11-26 23:27:32.347 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:27:32 compute-0 nova_compute[189387]: 2025-11-26 23:27:32.444 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:27:32 compute-0 nova_compute[189387]: 2025-11-26 23:27:32.447 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:27:32 compute-0 nova_compute[189387]: 2025-11-26 23:27:32.544 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:27:32 compute-0 nova_compute[189387]: 2025-11-26 23:27:32.546 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:27:32 compute-0 nova_compute[189387]: 2025-11-26 23:27:32.641 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:27:32 compute-0 nova_compute[189387]: 2025-11-26 23:27:32.643 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:27:32 compute-0 nova_compute[189387]: 2025-11-26 23:27:32.711 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:27:33 compute-0 nova_compute[189387]: 2025-11-26 23:27:33.263 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:27:33 compute-0 nova_compute[189387]: 2025-11-26 23:27:33.266 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4603MB free_disk=72.3167610168457GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
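
Every PCI function in the view above is either virtio (vendor 1af4) or Intel chipset emulation (vendor 8086), and all report numa_node null, which fits the preceding warning about `socket` PCI NUMA affinity. For illustration only, summarising such a device list once it has been parsed from JSON into a list of dicts named pci_devices (a hypothetical variable):

    from collections import Counter

    by_vendor = Counter(dev["vendor_id"] for dev in pci_devices)
    print(by_vendor)  # Counter({'1af4': 6, '8086': 5}) for the list above
    assert all(dev["numa_node"] is None for dev in pci_devices)
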
Nov 26 23:27:33 compute-0 nova_compute[189387]: 2025-11-26 23:27:33.267 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:27:33 compute-0 nova_compute[189387]: 2025-11-26 23:27:33.268 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:27:33 compute-0 nova_compute[189387]: 2025-11-26 23:27:33.408 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 3214d9e6-3c61-49f0-a353-01201a6aa6db actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:27:33 compute-0 nova_compute[189387]: 2025-11-26 23:27:33.409 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 0d344cef-8e34-4a0c-b747-b8f1f12bbe26 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:27:33 compute-0 nova_compute[189387]: 2025-11-26 23:27:33.409 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:27:33 compute-0 nova_compute[189387]: 2025-11-26 23:27:33.410 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance f0ac9c29-04ba-4737-8af6-8fc91e451e8c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:27:33 compute-0 nova_compute[189387]: 2025-11-26 23:27:33.411 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:27:33 compute-0 nova_compute[189387]: 2025-11-26 23:27:33.412 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
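
The final view is consistent with the four placement allocations listed above (each instance holds VCPU=1, MEMORY_MB=512, DISK_GB=2), with nova counting the host's reserved memory as used. A worked check of the arithmetic:

    instances = 4
    used_vcpus = instances * 1        # 4, matches used_vcpus=4
    used_ram = instances * 512 + 512  # 2560 MB incl. 512 MB reserved, matches used_ram=2560MB
    used_disk = instances * 2         # 8 GB, matches used_disk=8GB
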
Nov 26 23:27:33 compute-0 nova_compute[189387]: 2025-11-26 23:27:33.550 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:27:33 compute-0 nova_compute[189387]: 2025-11-26 23:27:33.572 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
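
Placement turns this inventory into schedulable capacity as (total - reserved) * allocation_ratio per resource class, while min_unit/max_unit/step_size only constrain individual requests. Worked out for the numbers above:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2
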
Nov 26 23:27:33 compute-0 nova_compute[189387]: 2025-11-26 23:27:33.576 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:27:33 compute-0 nova_compute[189387]: 2025-11-26 23:27:33.576 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.308s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:27:34 compute-0 podman[243485]: 2025-11-26 23:27:34.845178416 +0000 UTC m=+0.122345094 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
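
Each container health_status event like the one above is podman running the healthcheck configured in config_data ('test': '/openstack/healthcheck ...') and recording the outcome plus the failing streak. The same check can be driven by hand through the podman CLI; a sketch against the container named in that event:

    import subprocess

    # `podman healthcheck run` exits 0 when the configured test passes.
    rc = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_compute"],
    ).returncode
    print("healthy" if rc == 0 else "unhealthy")
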
Nov 26 23:27:34 compute-0 nova_compute[189387]: 2025-11-26 23:27:34.988 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:27:35 compute-0 nova_compute[189387]: 2025-11-26 23:27:35.573 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:27:35 compute-0 nova_compute[189387]: 2025-11-26 23:27:35.574 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:27:35 compute-0 nova_compute[189387]: 2025-11-26 23:27:35.575 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:27:36 compute-0 nova_compute[189387]: 2025-11-26 23:27:36.082 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:27:36 compute-0 nova_compute[189387]: 2025-11-26 23:27:36.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:27:37 compute-0 nova_compute[189387]: 2025-11-26 23:27:37.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:27:39 compute-0 nova_compute[189387]: 2025-11-26 23:27:39.993 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:27:41 compute-0 nova_compute[189387]: 2025-11-26 23:27:41.085 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:27:42 compute-0 nova_compute[189387]: 2025-11-26 23:27:42.120 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:27:42 compute-0 podman[243505]: 2025-11-26 23:27:42.838900736 +0000 UTC m=+0.123898695 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Nov 26 23:27:44 compute-0 nova_compute[189387]: 2025-11-26 23:27:44.995 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:27:45 compute-0 podman[243531]: 2025-11-26 23:27:45.8752887 +0000 UTC m=+0.149221115 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, name=ubi9, version=9.4, architecture=x86_64, build-date=2024-09-18T21:23:30, release-0.7.12=, io.buildah.version=1.29.0, vcs-type=git, vendor=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 26 23:27:45 compute-0 podman[243533]: 2025-11-26 23:27:45.87990357 +0000 UTC m=+0.148999688 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 26 23:27:45 compute-0 podman[243534]: 2025-11-26 23:27:45.890300081 +0000 UTC m=+0.152832748 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 26 23:27:45 compute-0 podman[243535]: 2025-11-26 23:27:45.890694541 +0000 UTC m=+0.140332043 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, architecture=x86_64, config_id=edpm, build-date=2025-08-20T13:12:41, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-type=git, container_name=openstack_network_exporter, maintainer=Red Hat, Inc.)
Nov 26 23:27:45 compute-0 podman[243532]: 2025-11-26 23:27:45.910179788 +0000 UTC m=+0.179008070 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 23:27:46 compute-0 nova_compute[189387]: 2025-11-26 23:27:46.090 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:27:49 compute-0 nova_compute[189387]: 2025-11-26 23:27:49.998 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:27:51 compute-0 nova_compute[189387]: 2025-11-26 23:27:51.092 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:27:54 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 26 23:27:55 compute-0 nova_compute[189387]: 2025-11-26 23:27:55.002 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:27:55 compute-0 podman[243630]: 2025-11-26 23:27:55.858353721 +0000 UTC m=+0.141959045 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 26 23:27:56 compute-0 nova_compute[189387]: 2025-11-26 23:27:56.097 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:27:59 compute-0 podman[203621]: time="2025-11-26T23:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:27:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:27:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
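
The podman[203621] access-log lines are the podman system service answering libpod REST calls over the unix socket that podman_exporter mounts (CONTAINER_HOST=unix:///run/podman/podman.sock in its config above). The same containers/json request can be issued with only the standard library; a sketch, with the socket path taken from that config:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, sock_path):
            super().__init__("localhost")  # host header only; transport is the socket
            self.sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")
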
Nov 26 23:28:00 compute-0 nova_compute[189387]: 2025-11-26 23:28:00.005 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:28:00 compute-0 podman[243649]: 2025-11-26 23:28:00.835896155 +0000 UTC m=+0.116689317 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 23:28:01 compute-0 nova_compute[189387]: 2025-11-26 23:28:01.101 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:28:01 compute-0 openstack_network_exporter[205787]: ERROR   23:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:28:01 compute-0 openstack_network_exporter[205787]: ERROR   23:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:28:01 compute-0 openstack_network_exporter[205787]: ERROR   23:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:28:01 compute-0 openstack_network_exporter[205787]: ERROR   23:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:28:01 compute-0 openstack_network_exporter[205787]: ERROR   23:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:28:05 compute-0 nova_compute[189387]: 2025-11-26 23:28:05.008 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:28:05 compute-0 podman[243672]: 2025-11-26 23:28:05.8549351 +0000 UTC m=+0.129677246 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute)
Nov 26 23:28:06 compute-0 nova_compute[189387]: 2025-11-26 23:28:06.116 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:28:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:28:09.633 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:28:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:28:09.633 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:28:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:28:09.634 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:28:10 compute-0 nova_compute[189387]: 2025-11-26 23:28:10.012 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:28:11 compute-0 nova_compute[189387]: 2025-11-26 23:28:11.120 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:28:13 compute-0 podman[243692]: 2025-11-26 23:28:13.929572272 +0000 UTC m=+0.203856826 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:28:15 compute-0 nova_compute[189387]: 2025-11-26 23:28:15.015 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:28:16 compute-0 nova_compute[189387]: 2025-11-26 23:28:16.124 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:28:16 compute-0 podman[243719]: 2025-11-26 23:28:16.819736124 +0000 UTC m=+0.099712466 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 26 23:28:16 compute-0 podman[243718]: 2025-11-26 23:28:16.827744813 +0000 UTC m=+0.112556901 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 23:28:16 compute-0 podman[243720]: 2025-11-26 23:28:16.830838173 +0000 UTC m=+0.112397296 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Nov 26 23:28:16 compute-0 podman[243717]: 2025-11-26 23:28:16.849630161 +0000 UTC m=+0.133178096 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, name=ubi9, release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Nov 26 23:28:16 compute-0 podman[243721]: 2025-11-26 23:28:16.867587639 +0000 UTC m=+0.134339077 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1755695350, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., vcs-type=git, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 26 23:28:20 compute-0 nova_compute[189387]: 2025-11-26 23:28:20.019 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:28:21 compute-0 nova_compute[189387]: 2025-11-26 23:28:21.128 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:28:25 compute-0 nova_compute[189387]: 2025-11-26 23:28:25.021 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:28:26 compute-0 nova_compute[189387]: 2025-11-26 23:28:26.132 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:28:26 compute-0 podman[243814]: 2025-11-26 23:28:26.833617062 +0000 UTC m=+0.103772042 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Nov 26 23:28:27 compute-0 nova_compute[189387]: 2025-11-26 23:28:27.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:28:27 compute-0 nova_compute[189387]: 2025-11-26 23:28:27.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:28:27 compute-0 nova_compute[189387]: 2025-11-26 23:28:27.127 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 23:28:28 compute-0 nova_compute[189387]: 2025-11-26 23:28:28.067 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:28:28 compute-0 nova_compute[189387]: 2025-11-26 23:28:28.068 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:28:28 compute-0 nova_compute[189387]: 2025-11-26 23:28:28.068 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 23:28:28 compute-0 nova_compute[189387]: 2025-11-26 23:28:28.069 189391 DEBUG nova.objects.instance [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3214d9e6-3c61-49f0-a353-01201a6aa6db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 23:28:29 compute-0 nova_compute[189387]: 2025-11-26 23:28:29.348 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Updating instance_info_cache with network_info: [{"id": "3109b207-2fdd-46a4-8789-08fff2b3f916", "address": "fa:16:3e:bf:c7:ca", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3109b207-2f", "ovs_interfaceid": "3109b207-2fdd-46a4-8789-08fff2b3f916", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 23:28:29 compute-0 nova_compute[189387]: 2025-11-26 23:28:29.370 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 23:28:29 compute-0 nova_compute[189387]: 2025-11-26 23:28:29.371 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
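The instance_info_cache payload two lines up is valid JSON, so the fixed/floating address pairs it carries can be recovered directly. A short sketch, where network_info_json stands for the bracketed list logged above:

    import json

    def address_pairs(network_info_json: str) -> list:
        """Return (fixed_ip, floating_ip) tuples from a network_info dump."""
        pairs = []
        for vif in json.loads(network_info_json):
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    for fip in ip.get("floating_ips", []):
                        pairs.append((ip["address"], fip["address"]))
        return pairs

    # For the cache entry above this yields [("192.168.0.4", "192.168.122.212")].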
Nov 26 23:28:29 compute-0 podman[203621]: time="2025-11-26T23:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:28:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:28:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
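The two GET lines show a client (the podman metrics exporter, per the CONTAINER_HOST=unix:///run/podman/podman.sock setting logged further down) talking to podman's REST API over its unix socket. A standard-library sketch of the same containers/json query; the socket path and API version are taken from the log, while the UnixHTTPConnection shim is illustrative:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # http.client speaks TCP by default; override connect() to dial
        # an AF_UNIX socket instead (the host argument is ignored).
        def __init__(self, socket_path: str):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")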
Nov 26 23:28:30 compute-0 nova_compute[189387]: 2025-11-26 23:28:30.025 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:28:31 compute-0 nova_compute[189387]: 2025-11-26 23:28:31.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:28:31 compute-0 nova_compute[189387]: 2025-11-26 23:28:31.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:28:31 compute-0 nova_compute[189387]: 2025-11-26 23:28:31.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 23:28:31 compute-0 nova_compute[189387]: 2025-11-26 23:28:31.135 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:28:31 compute-0 openstack_network_exporter[205787]: ERROR   23:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:28:31 compute-0 openstack_network_exporter[205787]: ERROR   23:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:28:31 compute-0 openstack_network_exporter[205787]: ERROR   23:28:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:28:31 compute-0 openstack_network_exporter[205787]: ERROR   23:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:28:31 compute-0 openstack_network_exporter[205787]: ERROR   23:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
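The four exporter errors above reduce to the same condition: the collector finds no control socket for the daemon it wants to query (ovn-northd does not run on a compute node, and with the kernel datapath the dpif-netdev/pmd-* appctl calls have no userspace datapath to inspect). A hedged sketch of the underlying existence check; the glob patterns follow the conventional /run/openvswitch and /run/ovn socket locations and are an assumption, not the exporter's actual code:

    from glob import glob

    def control_sockets() -> dict:
        # ovs-vswitchd/ovsdb-server control sockets conventionally live
        # under /run/openvswitch, OVN daemons under /run/ovn.
        return {
            "ovs": glob("/run/openvswitch/*.ctl"),
            "ovn": glob("/run/ovn/*.ctl"),
        }

    # No ovn-northd.*.ctl entry here corresponds to the "no control
    # socket files found for ovn-northd" errors logged above.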
Nov 26 23:28:31 compute-0 podman[243832]: 2025-11-26 23:28:31.823248281 +0000 UTC m=+0.100660470 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 23:28:32 compute-0 nova_compute[189387]: 2025-11-26 23:28:32.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:28:32 compute-0 nova_compute[189387]: 2025-11-26 23:28:32.153 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:28:32 compute-0 nova_compute[189387]: 2025-11-26 23:28:32.153 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:28:32 compute-0 nova_compute[189387]: 2025-11-26 23:28:32.154 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:28:32 compute-0 nova_compute[189387]: 2025-11-26 23:28:32.155 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:28:32 compute-0 nova_compute[189387]: 2025-11-26 23:28:32.282 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:28:32 compute-0 nova_compute[189387]: 2025-11-26 23:28:32.377 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:28:32 compute-0 nova_compute[189387]: 2025-11-26 23:28:32.379 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:28:32 compute-0 nova_compute[189387]: 2025-11-26 23:28:32.471 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:28:32 compute-0 nova_compute[189387]: 2025-11-26 23:28:32.472 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:28:32 compute-0 nova_compute[189387]: 2025-11-26 23:28:32.530 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:28:32 compute-0 nova_compute[189387]: 2025-11-26 23:28:32.532 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:28:32 compute-0 nova_compute[189387]: 2025-11-26 23:28:32.594 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:28:32 compute-0 nova_compute[189387]: 2025-11-26 23:28:32.601 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:28:32 compute-0 nova_compute[189387]: 2025-11-26 23:28:32.682 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:28:32 compute-0 nova_compute[189387]: 2025-11-26 23:28:32.684 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:28:32 compute-0 nova_compute[189387]: 2025-11-26 23:28:32.746 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:28:32 compute-0 nova_compute[189387]: 2025-11-26 23:28:32.748 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:28:32 compute-0 nova_compute[189387]: 2025-11-26 23:28:32.813 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:28:32 compute-0 nova_compute[189387]: 2025-11-26 23:28:32.815 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:28:32 compute-0 nova_compute[189387]: 2025-11-26 23:28:32.908 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:28:32 compute-0 nova_compute[189387]: 2025-11-26 23:28:32.917 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:28:32 compute-0 nova_compute[189387]: 2025-11-26 23:28:32.975 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:28:32 compute-0 nova_compute[189387]: 2025-11-26 23:28:32.976 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:28:33 compute-0 nova_compute[189387]: 2025-11-26 23:28:33.074 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:28:33 compute-0 nova_compute[189387]: 2025-11-26 23:28:33.076 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:28:33 compute-0 nova_compute[189387]: 2025-11-26 23:28:33.162 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:28:33 compute-0 nova_compute[189387]: 2025-11-26 23:28:33.164 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:28:33 compute-0 nova_compute[189387]: 2025-11-26 23:28:33.235 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:28:33 compute-0 nova_compute[189387]: 2025-11-26 23:28:33.249 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:28:33 compute-0 nova_compute[189387]: 2025-11-26 23:28:33.324 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:28:33 compute-0 nova_compute[189387]: 2025-11-26 23:28:33.326 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:28:33 compute-0 nova_compute[189387]: 2025-11-26 23:28:33.385 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:28:33 compute-0 nova_compute[189387]: 2025-11-26 23:28:33.386 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:28:33 compute-0 nova_compute[189387]: 2025-11-26 23:28:33.483 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:28:33 compute-0 nova_compute[189387]: 2025-11-26 23:28:33.485 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:28:33 compute-0 nova_compute[189387]: 2025-11-26 23:28:33.542 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
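Each Running cmd / CMD "..." returned pair above is the resource tracker sizing one instance disk: qemu-img info runs under oslo's prlimit wrapper, which caps the child at a 1 GiB address space (--as=1073741824) and 30 CPU seconds (--cpu=30) so a pathological image cannot wedge the audit. A minimal sketch of the same measurement without the oslo plumbing, using one of the disk paths from the log:

    import json
    import subprocess

    def disk_info(path: str) -> dict:
        # --force-share lets qemu-img read an image a running guest
        # still holds open; --output=json gives a parseable result.
        result = subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            check=True, capture_output=True, text=True,
        )
        return json.loads(result.stdout)

    info = disk_info("/var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk")
    print(info["virtual-size"], info["format"])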
Nov 26 23:28:33 compute-0 nova_compute[189387]: 2025-11-26 23:28:33.984 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:28:33 compute-0 nova_compute[189387]: 2025-11-26 23:28:33.986 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4595MB free_disk=72.3167610168457GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 23:28:33 compute-0 nova_compute[189387]: 2025-11-26 23:28:33.987 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:28:33 compute-0 nova_compute[189387]: 2025-11-26 23:28:33.988 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:28:34 compute-0 nova_compute[189387]: 2025-11-26 23:28:34.136 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 3214d9e6-3c61-49f0-a353-01201a6aa6db actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:28:34 compute-0 nova_compute[189387]: 2025-11-26 23:28:34.137 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 0d344cef-8e34-4a0c-b747-b8f1f12bbe26 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:28:34 compute-0 nova_compute[189387]: 2025-11-26 23:28:34.138 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:28:34 compute-0 nova_compute[189387]: 2025-11-26 23:28:34.139 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance f0ac9c29-04ba-4737-8af6-8fc91e451e8c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:28:34 compute-0 nova_compute[189387]: 2025-11-26 23:28:34.140 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:28:34 compute-0 nova_compute[189387]: 2025-11-26 23:28:34.141 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 23:28:34 compute-0 nova_compute[189387]: 2025-11-26 23:28:34.284 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:28:34 compute-0 nova_compute[189387]: 2025-11-26 23:28:34.317 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 23:28:34 compute-0 nova_compute[189387]: 2025-11-26 23:28:34.319 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:28:34 compute-0 nova_compute[189387]: 2025-11-26 23:28:34.319 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
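The inventory dict logged above is what placement schedules against: per resource class, usable capacity works out to (total - reserved) * allocation_ratio under placement's usual capacity rule. A quick worked check of the logged values:

    # Capacity per resource class: (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2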
Nov 26 23:28:35 compute-0 nova_compute[189387]: 2025-11-26 23:28:35.028 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:28:35 compute-0 nova_compute[189387]: 2025-11-26 23:28:35.321 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:28:35 compute-0 nova_compute[189387]: 2025-11-26 23:28:35.322 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:28:36 compute-0 nova_compute[189387]: 2025-11-26 23:28:36.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:28:36 compute-0 nova_compute[189387]: 2025-11-26 23:28:36.138 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:28:36 compute-0 podman[243904]: 2025-11-26 23:28:36.800264132 +0000 UTC m=+0.092613891 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.843 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them, so the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.844 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.844 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
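The burst of "Registering pollster" lines above hands every loaded extension to one shared ThreadPoolExecutor (the same 0x7f7ce8d5ff50 object in each line). A minimal, purely illustrative sketch of that fan-out; the pool size and the pollster names here are assumptions, not values taken from the log:

    from concurrent.futures import ThreadPoolExecutor

    def run_pollster(name):
        # stand-in for one pollster run; returns what the agent would log
        return f"Finished polling pollster {name}"

    executor = ThreadPoolExecutor(max_workers=4)   # pool size is an assumption
    futures = [executor.submit(run_pollster, name)
               for name in ('disk.ephemeral.size', 'network.incoming.packets')]
    for future in futures:
        print(future.result())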
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.854 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd', 'name': 'vn-fhdmirp-runjo4u2h7na-he3onrrerp7p-vnf-pxixoz6blnnj', 'flavor': {'id': 'abcd883d-a9af-4dee-93ae-b5623bc853b6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '422f324f-e13a-4c74-ba29-023e791ed636'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd2e793599b6418881c391df7f71e0c6', 'user_id': '6ad061874c77438db2e6d8efb2b1400b', 'hostId': '78fe62e880b703c207d346101c9f9f1436f7f233cb48d27a5485236f', 'status': 'active', 'metadata': {'metering.server_group': '6ec897c5-079b-468e-ab49-e7a7350f9bc9'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.857 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '0d344cef-8e34-4a0c-b747-b8f1f12bbe26', 'name': 'vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp', 'flavor': {'id': 'abcd883d-a9af-4dee-93ae-b5623bc853b6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '422f324f-e13a-4c74-ba29-023e791ed636'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd2e793599b6418881c391df7f71e0c6', 'user_id': '6ad061874c77438db2e6d8efb2b1400b', 'hostId': '78fe62e880b703c207d346101c9f9f1436f7f233cb48d27a5485236f', 'status': 'active', 'metadata': {'metering.server_group': '6ec897c5-079b-468e-ab49-e7a7350f9bc9'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.861 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f0ac9c29-04ba-4737-8af6-8fc91e451e8c', 'name': 'vn-fhdmirp-gcwraztym6um-bi3jxhg2edck-vnf-4tssxs7u7dl3', 'flavor': {'id': 'abcd883d-a9af-4dee-93ae-b5623bc853b6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '422f324f-e13a-4c74-ba29-023e791ed636'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd2e793599b6418881c391df7f71e0c6', 'user_id': '6ad061874c77438db2e6d8efb2b1400b', 'hostId': '78fe62e880b703c207d346101c9f9f1436f7f233cb48d27a5485236f', 'status': 'active', 'metadata': {'metering.server_group': '6ec897c5-079b-468e-ab49-e7a7350f9bc9'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.864 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3214d9e6-3c61-49f0-a353-01201a6aa6db', 'name': 'test_0', 'flavor': {'id': 'abcd883d-a9af-4dee-93ae-b5623bc853b6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '422f324f-e13a-4c74-ba29-023e791ed636'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd2e793599b6418881c391df7f71e0c6', 'user_id': '6ad061874c77438db2e6d8efb2b1400b', 'hostId': '78fe62e880b703c207d346101c9f9f1436f7f233cb48d27a5485236f', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
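The four "instance data" entries above are the resource dicts that discover_libvirt_polling hands to the compute pollsters for this cycle. A trimmed sketch of the fields the samples below actually consume, with values copied from the m1.small entries (this is not the complete payload):

    instance = {
        'id': '2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd',
        'flavor': {'name': 'm1.small', 'vcpus': 1, 'ram': 512,   # ram in MB
                   'disk': 1, 'ephemeral': 1, 'swap': 0},        # sizes in GB
        'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003',    # libvirt domain name
        'status': 'active',
    }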
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.864 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.864 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.864 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.864 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.865 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
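disk.ephemeral.size needs no libvirt call at all: it can be read straight off the discovered flavor, which is the likely reason the poll finishes within a millisecond. A minimal sketch under that assumption; disk.root.size, polled a few lines below, uses the 'disk' field the same way:

    def flavor_gb(instance, field):
        # flavor sizes in the discovery payload are whole gigabytes
        return instance['flavor'][field]

    instance = {'flavor': {'disk': 1, 'ephemeral': 1}}
    assert flavor_gb(instance, 'ephemeral') == 1   # disk.ephemeral.size for m1.small
    assert flavor_gb(instance, 'disk') == 1        # disk.root.size for m1.small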
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.865 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.865 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.865 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.866 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.866 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T23:28:36.864729) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.867 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T23:28:36.865981) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
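Two thread ids interleave throughout this cycle: 14 logs "heartbeat update" from inside each pollster run, and 12 logs "Updated heartbeat" moments later from _update_status. A purely illustrative producer/consumer sketch of that handoff; the queue is an assumption, since the log only shows the two threads and the timestamps:

    import datetime
    import queue
    import threading

    beats = queue.Queue()

    def pollster_thread():            # plays the role of thread 14
        beats.put(('disk.ephemeral.size',
                   datetime.datetime.now(datetime.timezone.utc)))

    def status_thread():              # plays the role of thread 12
        name, ts = beats.get()
        print(f"Updated heartbeat for {name} ({ts.isoformat()})")

    threading.Thread(target=pollster_thread).start()
    status_thread()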
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.870 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.875 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.incoming.packets volume: 54 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.880 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.883 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.884 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
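The per-instance packet counts above are cumulative counters read from libvirt. A sketch of where such numbers come from, assuming direct use of the libvirt-python bindings; the tap device name is hypothetical (the agent resolves it from the domain XML), and the same 8-tuple also feeds the drop, error, and byte meters polled below:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-00000003')   # OS-EXT-SRV-ATTR:instance_name above
    # interfaceStats -> (rx_bytes, rx_packets, rx_errs, rx_drop,
    #                    tx_bytes, tx_packets, tx_errs, tx_drop)
    rx_bytes, rx_packets, rx_errs, rx_drop, *_ = dom.interfaceStats('tap0')  # hypothetical device
    print(rx_packets)   # feeds network.incoming.packets
    print(rx_drop)      # feeds network.incoming.packets.drop, polled below
    print(rx_errs)      # feeds network.incoming.packets.error, polled below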
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.884 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.884 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.884 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.884 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.884 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.885 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.885 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.885 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.885 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.885 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.885 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.885 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.885 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.886 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.886 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.886 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.886 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.886 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.886 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.887 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.887 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.888 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T23:28:36.884662) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.888 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T23:28:36.885679) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.888 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T23:28:36.887161) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.924 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/cpu volume: 36050000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.955 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/cpu volume: 278770000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:36.985 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/cpu volume: 34780000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.018 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/cpu volume: 41200000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.019 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
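The cpu samples above are cumulative guest CPU time in nanoseconds (278770000000 ns is roughly 279 s for the busiest guest). Average utilization over a polling interval follows from the delta between two readings; a worked sketch with an assumed 300 s interval and a second reading invented for illustration:

    prev_ns = 278_770_000_000        # cumulative reading from this cycle
    curr_ns = 278_920_000_000        # illustrative reading one interval later
    interval_s, vcpus = 300, 1       # interval is assumed; m1.small has 1 vCPU
    util = (curr_ns - prev_ns) / (interval_s * 1e9 * vcpus)
    print(f"{util:.2%}")             # 0.05% average CPU over the interval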
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.020 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.020 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.021 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.022 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.022 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.023 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.025 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.026 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.027 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.028 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.028 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.029 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.030 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T23:28:37.022901) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.030 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.031 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.031 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.033 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T23:28:37.031831) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.067 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.068 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.068 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.101 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.101 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.102 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.133 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.134 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.134 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.163 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.164 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.164 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.166 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
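disk.device.capacity is a per-device meter, so each instance contributes three samples above: two of exactly 1073741824 bytes, matching the flavor's 1 GB root and 1 GB ephemeral disks, plus one small device the log does not name. A quick arithmetic check:

    assert 1 * 1024**3 == 1073741824   # flavor disk=1 and ephemeral=1, in bytes
    print(583680 / 1024)               # 570.0 KiB, third device on three guests
    print(485376 / 1024)               # 474.0 KiB, third device on test_0
    # Which device the small one is (e.g. a config drive) is an assumption;
    # the log only shows its size.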
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.166 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.166 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.167 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.167 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.167 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.167 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.168 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.outgoing.bytes volume: 7172 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.169 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.outgoing.bytes volume: 2328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.170 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.bytes volume: 2314 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.170 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T23:28:37.167525) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.170 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.171 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.171 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.171 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.171 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.171 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.172 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.172 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T23:28:37.171913) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.172 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.173 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.outgoing.bytes.delta volume: 507 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.173 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.174 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
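The .delta variant reports the change in the cumulative counter since the previous poll. The 70-byte sample for 2a76fe3c... is consistent with the cumulative network.outgoing.bytes of 2398 logged just above, implying a previous reading of 2328; a sketch of that subtraction with an illustrative cache shape:

    previous = {'2a76fe3c': 2328}    # implied earlier cumulative reading
    current = {'2a76fe3c': 2398}     # cumulative sample from this cycle
    delta = {k: v - previous.get(k, v) for k, v in current.items()}
    print(delta)                     # {'2a76fe3c': 70}, matching the sample above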
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.175 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.175 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.175 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.175 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.176 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.176 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/memory.usage volume: 49.10546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.176 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T23:28:37.176002) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.177 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/memory.usage volume: 48.93359375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.177 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.177 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.177 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
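memory.usage is reported in MB, and every value above sits on a 1/1024 MB boundary, which is what a KiB-granular source produces; which libvirt statistic feeds it is an assumption here. For instance 2a76fe3c...:

    kib = 50284                  # hypothetical raw KiB reading
    mb = kib / 1024
    assert mb == 49.10546875     # the memory.usage sample above, ~49 MB of 512 MB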
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.177 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.178 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.178 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.178 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.178 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.178 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.incoming.bytes volume: 1612 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.178 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.incoming.bytes volume: 8364 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.178 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.incoming.bytes volume: 1528 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.179 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.bytes volume: 2178 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.179 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.179 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.179 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.179 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.179 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.180 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.180 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.180 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.180 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.180 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.outgoing.packets volume: 60 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.180 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T23:28:37.178295) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.180 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.180 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T23:28:37.180187) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.180 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.181 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.181 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.181 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.181 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.181 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.182 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T23:28:37.181760) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.953 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.953 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:37.953 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.073 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.074 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.074 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 nova_compute[189387]: 2025-11-26 23:28:38.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:28:38 compute-0 nova_compute[189387]: 2025-11-26 23:28:38.126 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.195 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.196 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.196 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.324 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.325 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.325 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.326 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.327 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.327 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.328 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.328 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.328 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.329 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.330 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.329 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T23:28:38.328225) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.330 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.331 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.331 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.332 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.332 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.332 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.332 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.333 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.333 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.334 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.334 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.335 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.335 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.335 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.336 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.336 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.latency volume: 833217718 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.336 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.latency volume: 118947761 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.337 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.latency volume: 102487832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.337 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.latency volume: 933784002 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.338 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T23:28:38.332384) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.339 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.latency volume: 144704360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.339 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.latency volume: 114761007 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.340 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.latency volume: 1305394210 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.340 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.latency volume: 123508779 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.340 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.latency volume: 100732301 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.341 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.latency volume: 766490036 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.341 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.latency volume: 135917507 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.342 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.latency volume: 99383059 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.340 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T23:28:38.335898) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.343 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.343 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.344 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.344 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.344 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.344 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.345 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.345 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.346 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.346 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.347 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.347 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.348 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.348 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.349 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.349 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.350 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.350 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T23:28:38.344434) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.352 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.352 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.352 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.353 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.353 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.353 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.353 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.353 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T23:28:38.353210) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.354 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.354 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.354 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.355 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.355 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.355 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.356 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.356 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.356 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.357 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.358 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.358 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.358 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.358 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.358 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.358 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.359 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.359 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.359 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.360 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.360 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.360 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.361 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.361 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.361 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.362 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.362 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T23:28:38.358900) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.362 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.363 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.364 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.364 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.365 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.365 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.365 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.365 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.365 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.366 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.366 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.367 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.368 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.368 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.368 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.369 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.370 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T23:28:38.365600) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.370 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.370 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.371 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.371 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.372 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.372 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.373 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.373 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.373 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.373 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.373 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.373 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T23:28:38.373527) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.374 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.374 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.374 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.375 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.375 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.376 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.376 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.376 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.376 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.376 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.latency volume: 2706733169 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.377 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.latency volume: 13192002 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.377 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.377 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T23:28:38.376686) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.377 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.latency volume: 2747561632 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.378 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.latency volume: 15877212 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.378 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.379 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.latency volume: 2831606495 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.379 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.latency volume: 12954358 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.379 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.380 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.latency volume: 2067067389 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.380 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.latency volume: 14796330 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.381 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.381 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.382 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.382 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.382 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.382 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.382 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.383 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.383 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.383 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.incoming.bytes.delta volume: 42 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.384 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.384 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.385 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.385 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T23:28:38.382920) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.385 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.385 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.385 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.386 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.386 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.386 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T23:28:38.386012) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.386 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.387 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.387 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.requests volume: 244 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.387 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.388 14 DEBUG ceilometer.compute.pollsters [-] 0d344cef-8e34-4a0c-b747-b8f1f12bbe26/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.388 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.388 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.388 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.389 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.389 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.389 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.390 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.390 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.390 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.390 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.391 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.392 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.393 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.393 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.393 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.393 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.393 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.393 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.393 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.393 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.393 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:28:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:28:38.394 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
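The ceilometer DEBUG lines above all share one shape, <instance-uuid>/<meter> volume: <value>, with one line per disk device, followed by a per-pollster "Finished" line. A minimal offline parsing sketch, assuming the journal text arrives on stdin; the regex and the aggregation are illustrative and not part of ceilometer itself:

import re
import sys
from collections import defaultdict

# Matches e.g. "... DEBUG ceilometer.compute.pollsters [-] <uuid>/disk.device.write.requests volume: 244 _stats_to_sample ..."
SAMPLE_RE = re.compile(
    r"ceilometer\.compute\.pollsters \[-\] "
    r"(?P<instance>[0-9a-f-]{36})/(?P<meter>[\w.]+) volume: (?P<volume>\d+)"
)

totals = defaultdict(int)
for line in sys.stdin:
    m = SAMPLE_RE.search(line)
    if m:
        totals[(m.group("instance"), m.group("meter"))] += int(m.group("volume"))

# Summed across devices; each instance logs one sample per disk device.
for (instance, meter), total in sorted(totals.items()):
    print(instance, meter, total)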
Nov 26 23:28:40 compute-0 nova_compute[189387]: 2025-11-26 23:28:40.032 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:28:41 compute-0 nova_compute[189387]: 2025-11-26 23:28:41.141 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:28:44 compute-0 podman[243925]: 2025-11-26 23:28:44.813597564 +0000 UTC m=+0.134358346 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 26 23:28:45 compute-0 nova_compute[189387]: 2025-11-26 23:28:45.034 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:28:46 compute-0 nova_compute[189387]: 2025-11-26 23:28:46.143 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:28:47 compute-0 podman[243955]: 2025-11-26 23:28:47.794625588 +0000 UTC m=+0.080161058 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 26 23:28:47 compute-0 podman[243952]: 2025-11-26 23:28:47.808533629 +0000 UTC m=+0.104963162 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, distribution-scope=public, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.buildah.version=1.29.0, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, version=9.4, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.openshift.expose-services=, release=1214.1726694543)
Nov 26 23:28:47 compute-0 podman[243953]: 2025-11-26 23:28:47.816332662 +0000 UTC m=+0.105314731 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 23:28:47 compute-0 podman[243959]: 2025-11-26 23:28:47.829236007 +0000 UTC m=+0.109578882 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, release=1755695350, version=9.6, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public)
Nov 26 23:28:47 compute-0 podman[243954]: 2025-11-26 23:28:47.841919547 +0000 UTC m=+0.121749299 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 26 23:28:50 compute-0 nova_compute[189387]: 2025-11-26 23:28:50.037 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:28:51 compute-0 nova_compute[189387]: 2025-11-26 23:28:51.146 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:28:55 compute-0 nova_compute[189387]: 2025-11-26 23:28:55.040 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:28:56 compute-0 nova_compute[189387]: 2025-11-26 23:28:56.149 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:28:57 compute-0 podman[244049]: 2025-11-26 23:28:57.804519662 +0000 UTC m=+0.081478971 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 26 23:28:59 compute-0 podman[203621]: time="2025-11-26T23:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:28:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:28:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4810 "" "Go-http-client/1.1"
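The two GET requests above are the libpod REST API being scraped through the podman socket (the podman_exporter config later in this log sets CONTAINER_HOST to unix:///run/podman/podman.sock). A stdlib-only sketch of the same containers/json call, assuming read access to that socket; the "Names" and "State" fields follow the libpod JSON schema:

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP over an AF_UNIX socket instead of TCP."""
    def __init__(self, path):
        super().__init__("localhost")
        self.unix_path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.unix_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
body = conn.getresponse().read()
for container in json.loads(body):
    print(container["Names"], container["State"])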
Nov 26 23:29:00 compute-0 nova_compute[189387]: 2025-11-26 23:29:00.043 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:29:01 compute-0 nova_compute[189387]: 2025-11-26 23:29:01.151 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:29:01 compute-0 openstack_network_exporter[205787]: ERROR   23:29:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:29:01 compute-0 openstack_network_exporter[205787]: ERROR   23:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:29:01 compute-0 openstack_network_exporter[205787]: ERROR   23:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:29:01 compute-0 openstack_network_exporter[205787]: ERROR   23:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:29:01 compute-0 openstack_network_exporter[205787]: ERROR   23:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
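The appctl.go errors above mean the exporter could not find any <daemon>.<pid>.ctl unixctl sockets to talk to: ovn-northd does not run on a compute node, and the datapath calls fail for the same reason. A quick existence check, assuming the conventional rundirs (adjust if OVS_RUNDIR or OVN_RUNDIR are set differently):

import glob

# Conventional unixctl control socket locations.
PATTERNS = (
    "/var/run/openvswitch/ovsdb-server.*.ctl",
    "/var/run/openvswitch/ovs-vswitchd.*.ctl",
    "/var/run/ovn/ovn-northd.*.ctl",
)

for pattern in PATTERNS:
    found = glob.glob(pattern)
    print(pattern, "->", found if found else "missing")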
Nov 26 23:29:02 compute-0 podman[244068]: 2025-11-26 23:29:02.795438249 +0000 UTC m=+0.088896333 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:29:05 compute-0 nova_compute[189387]: 2025-11-26 23:29:05.046 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:29:06 compute-0 nova_compute[189387]: 2025-11-26 23:29:06.153 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:29:07 compute-0 podman[244092]: 2025-11-26 23:29:07.834599959 +0000 UTC m=+0.117286383 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm)
Nov 26 23:29:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:29:09.634 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:29:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:29:09.634 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:29:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:29:09.635 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
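The Acquiring/acquired/released trio above is the standard oslo_concurrency pattern: _check_child_processes runs under a named in-process lock so only one monitor pass executes at a time, and lockutils emits exactly these DEBUG lines around the call. A minimal sketch of the same pattern; the function body here is illustrative:

from oslo_concurrency import lockutils

@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    # Runs with the named lock held; lockutils logs the acquire,
    # the wait time, and the hold time around this body.
    pass

check_child_processes()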
Nov 26 23:29:10 compute-0 nova_compute[189387]: 2025-11-26 23:29:10.049 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:29:11 compute-0 nova_compute[189387]: 2025-11-26 23:29:11.156 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:29:15 compute-0 nova_compute[189387]: 2025-11-26 23:29:15.053 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:29:15 compute-0 podman[244113]: 2025-11-26 23:29:15.874263617 +0000 UTC m=+0.163304480 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 26 23:29:16 compute-0 nova_compute[189387]: 2025-11-26 23:29:16.159 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:29:18 compute-0 podman[244146]: 2025-11-26 23:29:18.809494618 +0000 UTC m=+0.083345690 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 26 23:29:18 compute-0 podman[244140]: 2025-11-26 23:29:18.820501285 +0000 UTC m=+0.106050171 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, name=ubi9, release-0.7.12=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., vcs-type=git, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-container, architecture=x86_64, build-date=2024-09-18T21:23:30, config_id=edpm, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, io.buildah.version=1.29.0)
Nov 26 23:29:18 compute-0 podman[244141]: 2025-11-26 23:29:18.833296488 +0000 UTC m=+0.103617618 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:29:18 compute-0 podman[244148]: 2025-11-26 23:29:18.837306632 +0000 UTC m=+0.087454067 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.openshift.expose-services=)
Nov 26 23:29:18 compute-0 podman[244142]: 2025-11-26 23:29:18.83992547 +0000 UTC m=+0.104433009 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 26 23:29:20 compute-0 nova_compute[189387]: 2025-11-26 23:29:20.056 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:29:21 compute-0 nova_compute[189387]: 2025-11-26 23:29:21.163 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:29:24 compute-0 nova_compute[189387]: 2025-11-26 23:29:24.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:29:24 compute-0 nova_compute[189387]: 2025-11-26 23:29:24.124 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 26 23:29:24 compute-0 nova_compute[189387]: 2025-11-26 23:29:24.208 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 26 23:29:25 compute-0 nova_compute[189387]: 2025-11-26 23:29:25.059 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:29:26 compute-0 nova_compute[189387]: 2025-11-26 23:29:26.166 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:29:28 compute-0 podman[244238]: 2025-11-26 23:29:28.769729302 +0000 UTC m=+0.065266249 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd)
Nov 26 23:29:29 compute-0 nova_compute[189387]: 2025-11-26 23:29:29.208 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:29:29 compute-0 nova_compute[189387]: 2025-11-26 23:29:29.209 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:29:29 compute-0 nova_compute[189387]: 2025-11-26 23:29:29.454 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-0d344cef-8e34-4a0c-b747-b8f1f12bbe26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:29:29 compute-0 nova_compute[189387]: 2025-11-26 23:29:29.455 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-0d344cef-8e34-4a0c-b747-b8f1f12bbe26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:29:29 compute-0 nova_compute[189387]: 2025-11-26 23:29:29.455 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 23:29:29 compute-0 podman[203621]: time="2025-11-26T23:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:29:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:29:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
Nov 26 23:29:30 compute-0 nova_compute[189387]: 2025-11-26 23:29:30.063 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:29:30 compute-0 nova_compute[189387]: 2025-11-26 23:29:30.864 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Updating instance_info_cache with network_info: [{"id": "faf484ac-094d-4505-a5ff-b8f5b82ac0cf", "address": "fa:16:3e:22:64:1d", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf484ac-09", "ovs_interfaceid": "faf484ac-094d-4505-a5ff-b8f5b82ac0cf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 23:29:30 compute-0 nova_compute[189387]: 2025-11-26 23:29:30.883 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-0d344cef-8e34-4a0c-b747-b8f1f12bbe26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 23:29:30 compute-0 nova_compute[189387]: 2025-11-26 23:29:30.884 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
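_run_pending_deletes and _heal_instance_info_cache are oslo_service periodic tasks: decorated methods on the compute manager that the service loop dispatches on a timer, which is what produces the "Running periodic task" lines above. A minimal sketch of that mechanism, with an illustrative task body:

from oslo_config import cfg
from oslo_service import periodic_task

class Manager(periodic_task.PeriodicTasks):
    @periodic_task.periodic_task(spacing=60)
    def heal_cache(self, context):
        print("healing info cache")

manager = Manager(cfg.CONF)
# The service loop calls this repeatedly; tasks whose spacing has
# elapsed are run, mirroring the run_periodic_tasks lines above.
manager.run_periodic_tasks(context=None)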
Nov 26 23:29:31 compute-0 nova_compute[189387]: 2025-11-26 23:29:31.169 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:29:31 compute-0 openstack_network_exporter[205787]: ERROR   23:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:29:31 compute-0 openstack_network_exporter[205787]: ERROR   23:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:29:31 compute-0 openstack_network_exporter[205787]: ERROR   23:29:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:29:31 compute-0 openstack_network_exporter[205787]: ERROR   23:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:29:31 compute-0 openstack_network_exporter[205787]: ERROR   23:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:29:31 compute-0 nova_compute[189387]: 2025-11-26 23:29:31.642 189391 DEBUG oslo_concurrency.lockutils [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "0d344cef-8e34-4a0c-b747-b8f1f12bbe26" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:29:31 compute-0 nova_compute[189387]: 2025-11-26 23:29:31.644 189391 DEBUG oslo_concurrency.lockutils [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "0d344cef-8e34-4a0c-b747-b8f1f12bbe26" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:29:31 compute-0 nova_compute[189387]: 2025-11-26 23:29:31.644 189391 DEBUG oslo_concurrency.lockutils [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "0d344cef-8e34-4a0c-b747-b8f1f12bbe26-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:29:31 compute-0 nova_compute[189387]: 2025-11-26 23:29:31.645 189391 DEBUG oslo_concurrency.lockutils [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "0d344cef-8e34-4a0c-b747-b8f1f12bbe26-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:29:31 compute-0 nova_compute[189387]: 2025-11-26 23:29:31.646 189391 DEBUG oslo_concurrency.lockutils [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "0d344cef-8e34-4a0c-b747-b8f1f12bbe26-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:29:31 compute-0 nova_compute[189387]: 2025-11-26 23:29:31.649 189391 INFO nova.compute.manager [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Terminating instance
Nov 26 23:29:31 compute-0 nova_compute[189387]: 2025-11-26 23:29:31.651 189391 DEBUG nova.compute.manager [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 26 23:29:31 compute-0 kernel: tapfaf484ac-09 (unregistering): left promiscuous mode
Nov 26 23:29:31 compute-0 NetworkManager[56227]: <info>  [1764199771.7169] device (tapfaf484ac-09): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 23:29:31 compute-0 ovn_controller[97697]: 2025-11-26T23:29:31Z|00050|binding|INFO|Releasing lport faf484ac-094d-4505-a5ff-b8f5b82ac0cf from this chassis (sb_readonly=0)
Nov 26 23:29:31 compute-0 ovn_controller[97697]: 2025-11-26T23:29:31Z|00051|binding|INFO|Setting lport faf484ac-094d-4505-a5ff-b8f5b82ac0cf down in Southbound
Nov 26 23:29:31 compute-0 nova_compute[189387]: 2025-11-26 23:29:31.725 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:29:31 compute-0 ovn_controller[97697]: 2025-11-26T23:29:31Z|00052|binding|INFO|Removing iface tapfaf484ac-09 ovn-installed in OVS
Nov 26 23:29:31 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:29:31.733 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:64:1d 192.168.0.173'], port_security=['fa:16:3e:22:64:1d 192.168.0.173'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-nvijrfhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-port-a64xkohxh7fv', 'neutron:cidrs': '192.168.0.173/24', 'neutron:device_id': '0d344cef-8e34-4a0c-b747-b8f1f12bbe26', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-nvijrfhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-port-a64xkohxh7fv', 'neutron:project_id': 'dd2e793599b6418881c391df7f71e0c6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f63b4453-d311-40b9-8478-8f99967e0625', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.185', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef9a1501-6a1b-48e2-a80c-71a5e303b45d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=faf484ac-094d-4505-a5ff-b8f5b82ac0cf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:29:31 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:29:31.735 106595 INFO neutron.agent.ovn.metadata.agent [-] Port faf484ac-094d-4505-a5ff-b8f5b82ac0cf in datapath 16c31f2c-5dd2-49b9-b313-1ecd3b059554 unbound from our chassis#033[00m
Nov 26 23:29:31 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:29:31.737 106595 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 16c31f2c-5dd2-49b9-b313-1ecd3b059554#033[00m
Nov 26 23:29:31 compute-0 nova_compute[189387]: 2025-11-26 23:29:31.740 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:29:31 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:29:31.765 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[4d5de5cd-3d78-40a6-8c70-81ed4d342492]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:29:31 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Nov 26 23:29:31 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 5min 50.705s CPU time.
Nov 26 23:29:31 compute-0 systemd-machined[155674]: Machine qemu-2-instance-00000002 terminated.
Nov 26 23:29:31 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:29:31.808 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb89568-9a65-41c8-9ba9-190fbae0b84a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:29:31 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:29:31.814 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[42b53a39-d9a4-4b47-a5af-bd4a399d079b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:29:31 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:29:31.855 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[962d44c1-9181-44ab-87cf-efc35422df0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:29:31 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:29:31.881 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[0d028b87-4af4-4e4b-b4b7-1801c8bc347d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16c31f2c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:bc:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383451, 'reachable_time': 28107, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 244270, 'error': None, 'target': 'ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:29:31 compute-0 nova_compute[189387]: 2025-11-26 23:29:31.891 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:29:31 compute-0 nova_compute[189387]: 2025-11-26 23:29:31.901 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:29:31 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:29:31.906 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[fbad2811-f44b-4e74-a975-2d1a7c2d516b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap16c31f2c-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383460, 'tstamp': 383460}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244274, 'error': None, 'target': 'ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap16c31f2c-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383463, 'tstamp': 383463}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 244274, 'error': None, 'target': 'ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:29:31 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:29:31.908 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16c31f2c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:29:31 compute-0 nova_compute[189387]: 2025-11-26 23:29:31.910 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:29:31 compute-0 nova_compute[189387]: 2025-11-26 23:29:31.921 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:29:31 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:29:31.921 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16c31f2c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:29:31 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:29:31.922 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:29:31 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:29:31.922 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap16c31f2c-50, col_values=(('external_ids', {'iface-id': 'fcca7a28-5262-4637-8ef9-d543dee768b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:29:31 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:29:31.922 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:29:31 compute-0 nova_compute[189387]: 2025-11-26 23:29:31.967 189391 INFO nova.virt.libvirt.driver [-] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Instance destroyed successfully.#033[00m
Nov 26 23:29:31 compute-0 nova_compute[189387]: 2025-11-26 23:29:31.969 189391 DEBUG nova.objects.instance [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lazy-loading 'resources' on Instance uuid 0d344cef-8e34-4a0c-b747-b8f1f12bbe26 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:29:32 compute-0 nova_compute[189387]: 2025-11-26 23:29:32.014 189391 DEBUG nova.virt.libvirt.vif [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T23:20:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-fhdmirp-4an7qdyax5ej-sxfbw5pnzmrv-vnf-xsxu7o2rmtsp',id=2,image_ref='422f324f-e13a-4c74-ba29-023e791ed636',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-26T23:20:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='6ec897c5-079b-468e-ab49-e7a7350f9bc9'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dd2e793599b6418881c391df7f71e0c6',ramdisk_id='',reservation_id='r-9dg0j52v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='422f324f-e13a-4c74-ba29-023e791ed636',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T23:20:28Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zNjkwNTA4NDc2MzE2OTQ1NTYwPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTM2OTA1MDg0NzYzMTY5NDU1NjA9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MzY5MDUwODQ3NjMxNjk0NTU2MD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTM2OTA1MDg0NzYzMTY5NDU1NjA9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0zNjkwNTA4NDc2MzE2OTQ1NTYwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0zNjkwNTA4NDc2MzE2OTQ1NTYwPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Nov 26 23:29:32 compute-0 nova_compute[189387]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MzY5MDUwODQ3NjMxNjk0NTU2MD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTM2OTA1MDg0NzYzMTY5NDU1NjA9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0zNjkwNTA4NDc2MzE2OTQ1NTYwPT0tLQo=',user_id='6ad061874c77438db2e6d8efb2b1400b',uuid=0d344cef-8e34-4a0c-b747-b8f1f12bbe26,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "faf484ac-094d-4505-a5ff-b8f5b82ac0cf", "address": "fa:16:3e:22:64:1d", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf484ac-09", "ovs_interfaceid": "faf484ac-094d-4505-a5ff-b8f5b82ac0cf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 26 23:29:32 compute-0 nova_compute[189387]: 2025-11-26 23:29:32.015 189391 DEBUG nova.network.os_vif_util [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Converting VIF {"id": "faf484ac-094d-4505-a5ff-b8f5b82ac0cf", "address": "fa:16:3e:22:64:1d", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.173", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaf484ac-09", "ovs_interfaceid": "faf484ac-094d-4505-a5ff-b8f5b82ac0cf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:29:32 compute-0 nova_compute[189387]: 2025-11-26 23:29:32.016 189391 DEBUG nova.network.os_vif_util [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:22:64:1d,bridge_name='br-int',has_traffic_filtering=True,id=faf484ac-094d-4505-a5ff-b8f5b82ac0cf,network=Network(16c31f2c-5dd2-49b9-b313-1ecd3b059554),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfaf484ac-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:29:32 compute-0 nova_compute[189387]: 2025-11-26 23:29:32.017 189391 DEBUG os_vif [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:22:64:1d,bridge_name='br-int',has_traffic_filtering=True,id=faf484ac-094d-4505-a5ff-b8f5b82ac0cf,network=Network(16c31f2c-5dd2-49b9-b313-1ecd3b059554),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfaf484ac-09') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 26 23:29:32 compute-0 nova_compute[189387]: 2025-11-26 23:29:32.020 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:29:32 compute-0 nova_compute[189387]: 2025-11-26 23:29:32.021 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfaf484ac-09, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:29:32 compute-0 nova_compute[189387]: 2025-11-26 23:29:32.024 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:29:32 compute-0 nova_compute[189387]: 2025-11-26 23:29:32.025 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:29:32 compute-0 nova_compute[189387]: 2025-11-26 23:29:32.028 189391 INFO os_vif [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:22:64:1d,bridge_name='br-int',has_traffic_filtering=True,id=faf484ac-094d-4505-a5ff-b8f5b82ac0cf,network=Network(16c31f2c-5dd2-49b9-b313-1ecd3b059554),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapfaf484ac-09')#033[00m
Nov 26 23:29:32 compute-0 nova_compute[189387]: 2025-11-26 23:29:32.029 189391 INFO nova.virt.libvirt.driver [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Deleting instance files /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26_del#033[00m
Nov 26 23:29:32 compute-0 nova_compute[189387]: 2025-11-26 23:29:32.030 189391 INFO nova.virt.libvirt.driver [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Deletion of /var/lib/nova/instances/0d344cef-8e34-4a0c-b747-b8f1f12bbe26_del complete#033[00m
Nov 26 23:29:32 compute-0 nova_compute[189387]: 2025-11-26 23:29:32.118 189391 DEBUG nova.virt.libvirt.host [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Nov 26 23:29:32 compute-0 nova_compute[189387]: 2025-11-26 23:29:32.119 189391 INFO nova.virt.libvirt.host [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] UEFI support detected#033[00m
Nov 26 23:29:32 compute-0 nova_compute[189387]: 2025-11-26 23:29:32.124 189391 INFO nova.compute.manager [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Took 0.47 seconds to destroy the instance on the hypervisor.#033[00m
Nov 26 23:29:32 compute-0 nova_compute[189387]: 2025-11-26 23:29:32.125 189391 DEBUG oslo.service.loopingcall [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 26 23:29:32 compute-0 nova_compute[189387]: 2025-11-26 23:29:32.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:29:32 compute-0 nova_compute[189387]: 2025-11-26 23:29:32.126 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 23:29:32 compute-0 nova_compute[189387]: 2025-11-26 23:29:32.126 189391 DEBUG nova.compute.manager [-] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 26 23:29:32 compute-0 nova_compute[189387]: 2025-11-26 23:29:32.127 189391 DEBUG nova.network.neutron [-] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 26 23:29:32 compute-0 nova_compute[189387]: 2025-11-26 23:29:32.132 189391 DEBUG nova.compute.manager [req-2a0c1f97-254b-450c-b895-ff6707db9819 req-17440518-5fd6-4f08-bda9-8b4417f0b898 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Received event network-vif-unplugged-faf484ac-094d-4505-a5ff-b8f5b82ac0cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:29:32 compute-0 nova_compute[189387]: 2025-11-26 23:29:32.132 189391 DEBUG oslo_concurrency.lockutils [req-2a0c1f97-254b-450c-b895-ff6707db9819 req-17440518-5fd6-4f08-bda9-8b4417f0b898 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "0d344cef-8e34-4a0c-b747-b8f1f12bbe26-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:29:32 compute-0 nova_compute[189387]: 2025-11-26 23:29:32.133 189391 DEBUG oslo_concurrency.lockutils [req-2a0c1f97-254b-450c-b895-ff6707db9819 req-17440518-5fd6-4f08-bda9-8b4417f0b898 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "0d344cef-8e34-4a0c-b747-b8f1f12bbe26-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:29:32 compute-0 nova_compute[189387]: 2025-11-26 23:29:32.133 189391 DEBUG oslo_concurrency.lockutils [req-2a0c1f97-254b-450c-b895-ff6707db9819 req-17440518-5fd6-4f08-bda9-8b4417f0b898 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "0d344cef-8e34-4a0c-b747-b8f1f12bbe26-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:29:32 compute-0 nova_compute[189387]: 2025-11-26 23:29:32.134 189391 DEBUG nova.compute.manager [req-2a0c1f97-254b-450c-b895-ff6707db9819 req-17440518-5fd6-4f08-bda9-8b4417f0b898 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] No waiting events found dispatching network-vif-unplugged-faf484ac-094d-4505-a5ff-b8f5b82ac0cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:29:32 compute-0 nova_compute[189387]: 2025-11-26 23:29:32.134 189391 DEBUG nova.compute.manager [req-2a0c1f97-254b-450c-b895-ff6707db9819 req-17440518-5fd6-4f08-bda9-8b4417f0b898 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Received event network-vif-unplugged-faf484ac-094d-4505-a5ff-b8f5b82ac0cf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 26 23:29:32 compute-0 rsyslogd[236865]: message too long (8192) with configured size 8096, begin of message is: 2025-11-26 23:29:32.014 189391 DEBUG nova.virt.libvirt.vif [None req-29d455bd-3e [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 26 23:29:33 compute-0 nova_compute[189387]: 2025-11-26 23:29:33.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:29:33 compute-0 nova_compute[189387]: 2025-11-26 23:29:33.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:29:33 compute-0 nova_compute[189387]: 2025-11-26 23:29:33.126 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:29:33 compute-0 nova_compute[189387]: 2025-11-26 23:29:33.160 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:29:33 compute-0 nova_compute[189387]: 2025-11-26 23:29:33.161 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:29:33 compute-0 nova_compute[189387]: 2025-11-26 23:29:33.161 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:29:33 compute-0 nova_compute[189387]: 2025-11-26 23:29:33.162 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 23:29:33 compute-0 nova_compute[189387]: 2025-11-26 23:29:33.327 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:29:33 compute-0 podman[244294]: 2025-11-26 23:29:33.381011583 +0000 UTC m=+0.122494249 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:29:33 compute-0 nova_compute[189387]: 2025-11-26 23:29:33.418 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:29:33 compute-0 nova_compute[189387]: 2025-11-26 23:29:33.420 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:29:33 compute-0 nova_compute[189387]: 2025-11-26 23:29:33.498 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:29:33 compute-0 nova_compute[189387]: 2025-11-26 23:29:33.500 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:29:33 compute-0 nova_compute[189387]: 2025-11-26 23:29:33.599 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:29:33 compute-0 nova_compute[189387]: 2025-11-26 23:29:33.602 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:29:33 compute-0 nova_compute[189387]: 2025-11-26 23:29:33.662 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:29:33 compute-0 nova_compute[189387]: 2025-11-26 23:29:33.670 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:29:33 compute-0 nova_compute[189387]: 2025-11-26 23:29:33.735 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:29:33 compute-0 nova_compute[189387]: 2025-11-26 23:29:33.737 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:29:33 compute-0 nova_compute[189387]: 2025-11-26 23:29:33.801 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:29:33 compute-0 nova_compute[189387]: 2025-11-26 23:29:33.802 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:29:33 compute-0 nova_compute[189387]: 2025-11-26 23:29:33.903 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:29:33 compute-0 nova_compute[189387]: 2025-11-26 23:29:33.904 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:29:33 compute-0 nova_compute[189387]: 2025-11-26 23:29:33.989 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:29:33 compute-0 nova_compute[189387]: 2025-11-26 23:29:33.997 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.056 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.058 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:29:34 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:29:34.072 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:74:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:17:d1:48:8c:c3'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:29:34 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:29:34.074 106595 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.076 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.119 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.121 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.185 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.187 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.248 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.270 189391 DEBUG nova.compute.manager [req-d8043a1d-87ee-471a-936a-9aacc99a2328 req-056c7087-a6a6-40de-a07d-ff0b5942f3bd f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Received event network-vif-plugged-faf484ac-094d-4505-a5ff-b8f5b82ac0cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.271 189391 DEBUG oslo_concurrency.lockutils [req-d8043a1d-87ee-471a-936a-9aacc99a2328 req-056c7087-a6a6-40de-a07d-ff0b5942f3bd f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "0d344cef-8e34-4a0c-b747-b8f1f12bbe26-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.272 189391 DEBUG oslo_concurrency.lockutils [req-d8043a1d-87ee-471a-936a-9aacc99a2328 req-056c7087-a6a6-40de-a07d-ff0b5942f3bd f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "0d344cef-8e34-4a0c-b747-b8f1f12bbe26-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.272 189391 DEBUG oslo_concurrency.lockutils [req-d8043a1d-87ee-471a-936a-9aacc99a2328 req-056c7087-a6a6-40de-a07d-ff0b5942f3bd f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "0d344cef-8e34-4a0c-b747-b8f1f12bbe26-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.273 189391 DEBUG nova.compute.manager [req-d8043a1d-87ee-471a-936a-9aacc99a2328 req-056c7087-a6a6-40de-a07d-ff0b5942f3bd f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] No waiting events found dispatching network-vif-plugged-faf484ac-094d-4505-a5ff-b8f5b82ac0cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.273 189391 WARNING nova.compute.manager [req-d8043a1d-87ee-471a-936a-9aacc99a2328 req-056c7087-a6a6-40de-a07d-ff0b5942f3bd f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Received unexpected event network-vif-plugged-faf484ac-094d-4505-a5ff-b8f5b82ac0cf for instance with vm_state active and task_state deleting.#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.273 189391 DEBUG nova.compute.manager [req-d8043a1d-87ee-471a-936a-9aacc99a2328 req-056c7087-a6a6-40de-a07d-ff0b5942f3bd f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Received event network-changed-faf484ac-094d-4505-a5ff-b8f5b82ac0cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.274 189391 DEBUG nova.compute.manager [req-d8043a1d-87ee-471a-936a-9aacc99a2328 req-056c7087-a6a6-40de-a07d-ff0b5942f3bd f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Refreshing instance network info cache due to event network-changed-faf484ac-094d-4505-a5ff-b8f5b82ac0cf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.274 189391 DEBUG oslo_concurrency.lockutils [req-d8043a1d-87ee-471a-936a-9aacc99a2328 req-056c7087-a6a6-40de-a07d-ff0b5942f3bd f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "refresh_cache-0d344cef-8e34-4a0c-b747-b8f1f12bbe26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.274 189391 DEBUG oslo_concurrency.lockutils [req-d8043a1d-87ee-471a-936a-9aacc99a2328 req-056c7087-a6a6-40de-a07d-ff0b5942f3bd f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquired lock "refresh_cache-0d344cef-8e34-4a0c-b747-b8f1f12bbe26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.275 189391 DEBUG nova.network.neutron [req-d8043a1d-87ee-471a-936a-9aacc99a2328 req-056c7087-a6a6-40de-a07d-ff0b5942f3bd f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Refreshing network info cache for port faf484ac-094d-4505-a5ff-b8f5b82ac0cf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.482 189391 INFO nova.network.neutron [req-d8043a1d-87ee-471a-936a-9aacc99a2328 req-056c7087-a6a6-40de-a07d-ff0b5942f3bd f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Port faf484ac-094d-4505-a5ff-b8f5b82ac0cf from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.483 189391 DEBUG nova.network.neutron [req-d8043a1d-87ee-471a-936a-9aacc99a2328 req-056c7087-a6a6-40de-a07d-ff0b5942f3bd f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.502 189391 DEBUG oslo_concurrency.lockutils [req-d8043a1d-87ee-471a-936a-9aacc99a2328 req-056c7087-a6a6-40de-a07d-ff0b5942f3bd f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Releasing lock "refresh_cache-0d344cef-8e34-4a0c-b747-b8f1f12bbe26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
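The cache refresh above boils down to: ask Neutron which ports are still bound to the instance, notice that port faf484ac-... is gone, and write an empty network_info list back to instance_info_cache. A sketch of the equivalent check, using openstacksdk as an assumed stand-in for Nova's internal Neutron client (the cloud name is hypothetical):

    import openstack  # openstacksdk; stand-in for Nova's Neutron client

    conn = openstack.connect(cloud='envvars')  # hypothetical cloud entry
    instance_uuid = '0d344cef-8e34-4a0c-b747-b8f1f12bbe26'
    port_id = 'faf484ac-094d-4505-a5ff-b8f5b82ac0cf'

    # Ports Neutron still reports as bound to this instance.
    bound = {p.id for p in conn.network.ports(device_id=instance_uuid)}
    if port_id not in bound:
        # Matches the INFO line above: the port is gone from Neutron, so the
        # instance's cached network info is rewritten without it.
        network_info = []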
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.624 189391 DEBUG nova.network.neutron [-] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.646 189391 INFO nova.compute.manager [-] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Took 2.52 seconds to deallocate network for instance.#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.695 189391 DEBUG oslo_concurrency.lockutils [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.695 189391 DEBUG oslo_concurrency.lockutils [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.778 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.779 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4769MB free_disk=72.33932495117188GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
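The resource view above embeds the host's PCI inventory as a JSON list. A quick sketch for summarizing it; the sample below is truncated to one device, but against the full list from the log it counts six virtio (vendor 1af4) and five Intel (vendor 8086) functions:

    import json
    from collections import Counter

    # Truncated sample; in the log the list holds 11 devices.
    pci_json = '''[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0",
        "product_id": "1001", "vendor_id": "1af4", "numa_node": null,
        "label": "label_1af4_1001", "dev_type": "type-PCI"}]'''

    devices = json.loads(pci_json)
    print(Counter(d['vendor_id'] for d in devices))
    # Against the full list: Counter({'1af4': 6, '8086': 5}).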
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.779 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.831 189391 DEBUG nova.scheduler.client.report [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Refreshing inventories for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.913 189391 DEBUG nova.scheduler.client.report [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Updating ProviderTree inventory for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 26 23:29:34 compute-0 nova_compute[189387]: 2025-11-26 23:29:34.914 189391 DEBUG nova.compute.provider_tree [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Updating inventory in ProviderTree for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
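Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio, so the data above yields 32 VCPUs, 7168 MB of RAM, and 70.2 GB of disk. A worked check:

    # Inventory values copied from the log line above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2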
Nov 26 23:29:35 compute-0 nova_compute[189387]: 2025-11-26 23:29:35.066 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:29:35 compute-0 nova_compute[189387]: 2025-11-26 23:29:35.158 189391 DEBUG nova.scheduler.client.report [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Refreshing aggregate associations for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 26 23:29:35 compute-0 nova_compute[189387]: 2025-11-26 23:29:35.179 189391 DEBUG nova.scheduler.client.report [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Refreshing trait associations for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78, traits: COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE41,HW_CPU_X86_AMD_SVM,HW_CPU_X86_MMX,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,HW_CPU_X86_SSE,HW_CPU_X86_AESNI,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI2,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 26 23:29:35 compute-0 nova_compute[189387]: 2025-11-26 23:29:35.287 189391 DEBUG nova.compute.provider_tree [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:29:35 compute-0 nova_compute[189387]: 2025-11-26 23:29:35.301 189391 DEBUG nova.scheduler.client.report [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 23:29:35 compute-0 nova_compute[189387]: 2025-11-26 23:29:35.320 189391 DEBUG oslo_concurrency.lockutils [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:29:35 compute-0 nova_compute[189387]: 2025-11-26 23:29:35.326 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:29:35 compute-0 nova_compute[189387]: 2025-11-26 23:29:35.359 189391 INFO nova.scheduler.client.report [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Deleted allocations for instance 0d344cef-8e34-4a0c-b747-b8f1f12bbe26#033[00m
Nov 26 23:29:35 compute-0 nova_compute[189387]: 2025-11-26 23:29:35.429 189391 DEBUG oslo_concurrency.lockutils [None req-29d455bd-3e92-4bb4-85ea-d59889170c07 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "0d344cef-8e34-4a0c-b747-b8f1f12bbe26" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:29:35 compute-0 nova_compute[189387]: 2025-11-26 23:29:35.431 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 3214d9e6-3c61-49f0-a353-01201a6aa6db actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 23:29:35 compute-0 nova_compute[189387]: 2025-11-26 23:29:35.432 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 23:29:35 compute-0 nova_compute[189387]: 2025-11-26 23:29:35.432 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance f0ac9c29-04ba-4737-8af6-8fc91e451e8c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 23:29:35 compute-0 nova_compute[189387]: 2025-11-26 23:29:35.433 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 23:29:35 compute-0 nova_compute[189387]: 2025-11-26 23:29:35.433 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
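The final view is consistent with the three per-instance allocations logged just above (1 VCPU, 512 MB, 2 GB each): used_vcpus = 3, used_disk = 6 GB, and used_ram = 2048 MB once the host's apparent 512 MB memory reservation is included. A quick tally (the reservation figure is inferred from the inventory's MEMORY_MB reserved=512):

    # Per-instance placement allocations, copied from the log lines above.
    allocations = [
        {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1},  # 3214d9e6-...
        {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1},  # 2a76fe3c-...
        {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1},  # f0ac9c29-...
    ]
    used_vcpus = sum(a['VCPU'] for a in allocations)            # 3
    used_disk = sum(a['DISK_GB'] for a in allocations)          # 6 GB
    used_ram = sum(a['MEMORY_MB'] for a in allocations) + 512   # 2048 MB with
    print(used_vcpus, used_disk, used_ram)                      # the reservation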
Nov 26 23:29:35 compute-0 nova_compute[189387]: 2025-11-26 23:29:35.521 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:29:35 compute-0 nova_compute[189387]: 2025-11-26 23:29:35.534 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 23:29:35 compute-0 nova_compute[189387]: 2025-11-26 23:29:35.553 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 23:29:35 compute-0 nova_compute[189387]: 2025-11-26 23:29:35.553 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.227s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:29:37 compute-0 nova_compute[189387]: 2025-11-26 23:29:37.024 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:29:37 compute-0 nova_compute[189387]: 2025-11-26 23:29:37.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:29:37 compute-0 nova_compute[189387]: 2025-11-26 23:29:37.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:29:37 compute-0 nova_compute[189387]: 2025-11-26 23:29:37.126 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:29:38 compute-0 nova_compute[189387]: 2025-11-26 23:29:38.140 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
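The "Running periodic task" lines come from oslo.service's periodic task machinery: methods decorated as periodic tasks are collected on the manager class and driven by run_periodic_tasks(). A minimal illustrative declaration (not Nova's manager):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _check_instance_build_time(self, context):
            # Each invocation by run_periodic_tasks() yields a
            # "Running periodic task ..." DEBUG line like those above.
            pass

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)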
Nov 26 23:29:39 compute-0 podman[244354]: 2025-11-26 23:29:39.15704092 +0000 UTC m=+0.148693130 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4)
Nov 26 23:29:39 compute-0 nova_compute[189387]: 2025-11-26 23:29:39.933 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:29:40 compute-0 nova_compute[189387]: 2025-11-26 23:29:40.009 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Triggering sync for uuid 3214d9e6-3c61-49f0-a353-01201a6aa6db _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 26 23:29:40 compute-0 nova_compute[189387]: 2025-11-26 23:29:40.009 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Triggering sync for uuid 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 26 23:29:40 compute-0 nova_compute[189387]: 2025-11-26 23:29:40.010 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Triggering sync for uuid f0ac9c29-04ba-4737-8af6-8fc91e451e8c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 26 23:29:40 compute-0 nova_compute[189387]: 2025-11-26 23:29:40.010 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "3214d9e6-3c61-49f0-a353-01201a6aa6db" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:29:40 compute-0 nova_compute[189387]: 2025-11-26 23:29:40.010 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "3214d9e6-3c61-49f0-a353-01201a6aa6db" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:29:40 compute-0 nova_compute[189387]: 2025-11-26 23:29:40.011 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:29:40 compute-0 nova_compute[189387]: 2025-11-26 23:29:40.012 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:29:40 compute-0 nova_compute[189387]: 2025-11-26 23:29:40.013 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "f0ac9c29-04ba-4737-8af6-8fc91e451e8c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:29:40 compute-0 nova_compute[189387]: 2025-11-26 23:29:40.014 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "f0ac9c29-04ba-4737-8af6-8fc91e451e8c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:29:40 compute-0 nova_compute[189387]: 2025-11-26 23:29:40.046 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "3214d9e6-3c61-49f0-a353-01201a6aa6db" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:29:40 compute-0 nova_compute[189387]: 2025-11-26 23:29:40.051 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:29:40 compute-0 nova_compute[189387]: 2025-11-26 23:29:40.069 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:29:40 compute-0 nova_compute[189387]: 2025-11-26 23:29:40.095 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "f0ac9c29-04ba-4737-8af6-8fc91e451e8c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
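_sync_power_states takes a short per-instance lock, queries the driver for the domain's actual power state, and reconciles it with the database record; the locks above are held for only tens of milliseconds because all three instances are already in sync. An illustrative outline of the reconciliation step (state codes follow Nova's power_state module; the helper names are hypothetical):

    # Power-state codes as defined in nova.compute.power_state.
    RUNNING, SHUTDOWN = 1, 4

    def sync_power_state(db_state, driver_state, stop_instance):
        """Hypothetical outline of query_driver_power_state_and_sync."""
        if db_state == driver_state:
            return  # in sync; the common case in the fast lock cycles above
        if db_state == RUNNING and driver_state == SHUTDOWN:
            # Hypervisor says the domain is off while the DB says running:
            # Nova converges by invoking the stop API.
            stop_instance()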
Nov 26 23:29:40 compute-0 nova_compute[189387]: 2025-11-26 23:29:40.204 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:29:42 compute-0 nova_compute[189387]: 2025-11-26 23:29:42.027 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:29:42 compute-0 nova_compute[189387]: 2025-11-26 23:29:42.119 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:29:44 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:29:44.076 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbd59242-3683-4df7-8a2a-12b2eb702783, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
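The transaction above bumps the neutron:ovn-metadata-sb-cfg counter in the chassis's external_ids, which is how the metadata agent acknowledges the southbound sequence number. A sketch of the same write through ovsdbapp's API; the connection endpoint is an assumption and the record UUID is taken from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    # Endpoint is an assumption; the record UUID is from the log line above.
    idl = connection.OvsdbIdl.from_server('ssl:ovsdbserver-sb:6642',
                                          'OVN_Southbound')
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

    sb.db_set('Chassis_Private', 'bbd59242-3683-4df7-8a2a-12b2eb702783',
              ('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'})
              ).execute(check_error=True)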
Nov 26 23:29:45 compute-0 nova_compute[189387]: 2025-11-26 23:29:45.073 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:29:46 compute-0 nova_compute[189387]: 2025-11-26 23:29:46.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:29:46 compute-0 nova_compute[189387]: 2025-11-26 23:29:46.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 26 23:29:46 compute-0 podman[244374]: 2025-11-26 23:29:46.895108654 +0000 UTC m=+0.181940235 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 26 23:29:46 compute-0 nova_compute[189387]: 2025-11-26 23:29:46.964 189391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764199771.962603, 0d344cef-8e34-4a0c-b747-b8f1f12bbe26 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:29:46 compute-0 nova_compute[189387]: 2025-11-26 23:29:46.964 189391 INFO nova.compute.manager [-] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] VM Stopped (Lifecycle Event)#033[00m
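"VM Stopped (Lifecycle Event)" is Nova re-emitting a libvirt domain lifecycle callback as its own LifecycleEvent. A minimal standalone listener for the same callback, assuming libvirt-python and a local system connection:

    import libvirt  # libvirt-python

    def on_lifecycle(conn, dom, event, detail, opaque):
        if event == libvirt.VIR_DOMAIN_EVENT_STOPPED:
            # Nova's driver turns this callback into the LifecycleEvent above.
            print(dom.UUIDString(), '=> Stopped')

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.openReadOnly('qemu:///system')
    conn.domainEventRegisterAny(None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                                on_lifecycle, None)
    while True:          # pump the default event loop
        libvirt.virEventRunDefaultImpl()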
Nov 26 23:29:46 compute-0 nova_compute[189387]: 2025-11-26 23:29:46.983 189391 DEBUG nova.compute.manager [None req-04569b4c-005e-44ce-830e-f1e16664a8a4 - - - - - -] [instance: 0d344cef-8e34-4a0c-b747-b8f1f12bbe26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:29:47 compute-0 nova_compute[189387]: 2025-11-26 23:29:47.030 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:29:49 compute-0 podman[244401]: 2025-11-26 23:29:49.851011893 +0000 UTC m=+0.103690539 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 23:29:49 compute-0 podman[244402]: 2025-11-26 23:29:49.86049865 +0000 UTC m=+0.101179163 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 26 23:29:49 compute-0 podman[244400]: 2025-11-26 23:29:49.86662948 +0000 UTC m=+0.137486299 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, distribution-scope=public, io.openshift.tags=base rhel9, io.openshift.expose-services=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 26 23:29:49 compute-0 podman[244408]: 2025-11-26 23:29:49.871201448 +0000 UTC m=+0.108675228 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:29:49 compute-0 podman[244410]: 2025-11-26 23:29:49.891154488 +0000 UTC m=+0.120989220 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 26 23:29:50 compute-0 nova_compute[189387]: 2025-11-26 23:29:50.074 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:29:52 compute-0 nova_compute[189387]: 2025-11-26 23:29:52.032 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:29:55 compute-0 nova_compute[189387]: 2025-11-26 23:29:55.078 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:29:57 compute-0 nova_compute[189387]: 2025-11-26 23:29:57.035 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:29:59 compute-0 podman[203621]: time="2025-11-26T23:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:29:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:29:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Nov 26 23:29:59 compute-0 podman[244495]: 2025-11-26 23:29:59.844327176 +0000 UTC m=+0.130317822 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 26 23:30:00 compute-0 nova_compute[189387]: 2025-11-26 23:30:00.081 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:30:01 compute-0 openstack_network_exporter[205787]: ERROR   23:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:30:01 compute-0 openstack_network_exporter[205787]: ERROR   23:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:30:01 compute-0 openstack_network_exporter[205787]: ERROR   23:30:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:30:01 compute-0 openstack_network_exporter[205787]: ERROR   23:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:30:01 compute-0 openstack_network_exporter[205787]: ERROR   23:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
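These exporter errors are expected on a compute node: ovn-northd and the OVN ovsdb-server do not run here, so their control sockets are absent, and with the kernel (non-DPDK) datapath there are no PMD threads for the dpif-netdev/pmd-*-show appctl calls to report on. A sketch of probing a daemon the way the exporter does, treating a missing control socket as a soft failure (illustrative only):

    import subprocess

    def appctl(target, *args):
        """Run ovs-appctl against a unixctl target; None on soft failure."""
        try:
            return subprocess.run(['ovs-appctl', '-t', target, *args],
                                  capture_output=True, text=True,
                                  check=True, timeout=5).stdout
        except (subprocess.CalledProcessError, subprocess.TimeoutExpired,
                FileNotFoundError):
            # Taken on this host for ovn-northd: no control socket exists,
            # matching the ERROR lines above.
            return None

    print(appctl('ovn-northd', 'status'))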
Nov 26 23:30:02 compute-0 nova_compute[189387]: 2025-11-26 23:30:02.037 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:30:03 compute-0 podman[244515]: 2025-11-26 23:30:03.854914659 +0000 UTC m=+0.139103031 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:30:05 compute-0 nova_compute[189387]: 2025-11-26 23:30:05.085 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:30:06 compute-0 ovn_controller[97697]: 2025-11-26T23:30:06Z|00053|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 26 23:30:07 compute-0 nova_compute[189387]: 2025-11-26 23:30:07.040 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:30:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:30:09.635 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:30:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:30:09.635 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:30:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:30:09.636 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:30:09 compute-0 podman[244539]: 2025-11-26 23:30:09.847790389 +0000 UTC m=+0.122171140 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4)
Nov 26 23:30:10 compute-0 nova_compute[189387]: 2025-11-26 23:30:10.089 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:30:12 compute-0 nova_compute[189387]: 2025-11-26 23:30:12.042 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:30:15 compute-0 nova_compute[189387]: 2025-11-26 23:30:15.092 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:30:17 compute-0 nova_compute[189387]: 2025-11-26 23:30:17.045 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:30:17 compute-0 podman[244558]: 2025-11-26 23:30:17.861035326 +0000 UTC m=+0.149441960 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller)
Nov 26 23:30:20 compute-0 nova_compute[189387]: 2025-11-26 23:30:20.096 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:30:20 compute-0 podman[244587]: 2025-11-26 23:30:20.82549698 +0000 UTC m=+0.093126704 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:30:20 compute-0 podman[244585]: 2025-11-26 23:30:20.834197647 +0000 UTC m=+0.104370997 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.openshift.expose-services=, release=1214.1726694543, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 26 23:30:20 compute-0 podman[244586]: 2025-11-26 23:30:20.836160868 +0000 UTC m=+0.108433102 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 23:30:20 compute-0 podman[244605]: 2025-11-26 23:30:20.861572159 +0000 UTC m=+0.111407449 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, release=1755695350, vendor=Red Hat, Inc., vcs-type=git, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container)
Nov 26 23:30:20 compute-0 podman[244588]: 2025-11-26 23:30:20.863218012 +0000 UTC m=+0.112306193 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 26 23:30:22 compute-0 nova_compute[189387]: 2025-11-26 23:30:22.047 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:30:25 compute-0 nova_compute[189387]: 2025-11-26 23:30:25.099 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:30:27 compute-0 nova_compute[189387]: 2025-11-26 23:30:27.051 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:30:29 compute-0 podman[203621]: time="2025-11-26T23:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:30:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:30:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
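The two GET requests above are the podman_exporter scraping the libpod REST API over the podman socket. A sketch of the same query from Python, assuming the socket path from the exporter's CONTAINER_HOST setting seen further down:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """Plain HTTP over the podman API unix socket."""
    def __init__(self, sock_path):
        super().__init__('localhost')
        self.sock_path = sock_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.sock_path)

# Socket path taken from CONTAINER_HOST=unix:///run/podman/podman.sock below.
conn = UnixHTTPConnection('/run/podman/podman.sock')
conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
containers = json.loads(conn.getresponse().read())
print(len(containers), 'containers')
```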
Nov 26 23:30:30 compute-0 nova_compute[189387]: 2025-11-26 23:30:30.103 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:30:30 compute-0 podman[244678]: 2025-11-26 23:30:30.81295507 +0000 UTC m=+0.101527703 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 23:30:31 compute-0 nova_compute[189387]: 2025-11-26 23:30:31.143 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:30:31 compute-0 nova_compute[189387]: 2025-11-26 23:30:31.145 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:30:31 compute-0 openstack_network_exporter[205787]: ERROR   23:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:30:31 compute-0 openstack_network_exporter[205787]: ERROR   23:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:30:31 compute-0 openstack_network_exporter[205787]: ERROR   23:30:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:30:31 compute-0 openstack_network_exporter[205787]: ERROR   23:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:30:31 compute-0 openstack_network_exporter[205787]: ERROR   23:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
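These exporter errors mean its ovs-appctl-style calls could not find daemon control sockets: ovn-northd never runs on a compute node, and the kernel datapath has no userspace PMD stats to show. A sketch of the underlying mechanism; the run directory and glob pattern are assumptions for a typical host:

```python
import glob
import subprocess

# appctl calls target a daemon control socket named <daemon>.<pid>.ctl.
sockets = glob.glob('/var/run/openvswitch/ovs-vswitchd.*.ctl')
if sockets:
    result = subprocess.run(['ovs-appctl', '-t', sockets[0], 'version'],
                            capture_output=True, text=True)
    print(result.stdout.strip())
else:
    print('no control socket files found')  # the failure mode logged above
```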
Nov 26 23:30:32 compute-0 nova_compute[189387]: 2025-11-26 23:30:32.054 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:30:32 compute-0 nova_compute[189387]: 2025-11-26 23:30:32.240 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:30:32 compute-0 nova_compute[189387]: 2025-11-26 23:30:32.241 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:30:32 compute-0 nova_compute[189387]: 2025-11-26 23:30:32.241 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 23:30:34 compute-0 podman[244698]: 2025-11-26 23:30:34.828005439 +0000 UTC m=+0.114357577 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.036 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Updating instance_info_cache with network_info: [{"id": "c5ede21d-87b7-4215-9363-b5863725bc1e", "address": "fa:16:3e:d8:b5:86", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5ede21d-87", "ovs_interfaceid": "c5ede21d-87b7-4215-9363-b5863725bc1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.060 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.061 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
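The Acquiring/Acquired/Releasing trio around the cache refresh is oslo.concurrency's named-lock pattern. A minimal sketch of that pattern, with the lock name mirroring nova's "refresh_cache-<instance uuid>" convention:

```python
from oslo_concurrency import lockutils

def heal_info_cache(instance_uuid):
    # Serialize per-instance cache refreshes behind a named in-process lock.
    with lockutils.lock('refresh_cache-%s' % instance_uuid):
        print('refreshing network info cache for', instance_uuid)

heal_info_cache('2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd')
```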
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.061 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.062 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.062 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.063 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.063 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
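Each Running periodic task line is oslo.service iterating the ComputeManager's decorated methods. A minimal sketch of that pattern; the 60 s spacing is an assumption, not nova's actual configuration:

```python
from oslo_config import cfg
from oslo_service import periodic_task

class Manager(periodic_task.PeriodicTasks):
    """Sketch of the decorated-method pattern behind the lines above."""
    def __init__(self):
        super().__init__(cfg.CONF)

    @periodic_task.periodic_task(spacing=60)  # spacing is an assumption
    def _poll_volume_usage(self, context):
        print('polling volume usage')

mgr = Manager()
mgr.run_periodic_tasks(context=None)  # the loop nova drives on a timer
```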
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.095 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.096 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.096 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.098 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.107 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.216 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.277 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.278 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.338 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.339 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.434 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.435 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.520 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.eph0 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.527 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.613 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.614 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.707 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.709 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.786 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.787 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.847 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.861 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.922 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.924 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.986 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:30:35 compute-0 nova_compute[189387]: 2025-11-26 23:30:35.988 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:30:36 compute-0 nova_compute[189387]: 2025-11-26 23:30:36.047 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:30:36 compute-0 nova_compute[189387]: 2025-11-26 23:30:36.049 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:30:36 compute-0 nova_compute[189387]: 2025-11-26 23:30:36.129 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
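Every qemu-img info call above runs under oslo.concurrency's prlimit wrapper, which caps the child at 1 GiB of address space (--as=1073741824) and 30 s of CPU time (--cpu=30). A sketch of the same invocation; the disk path is a placeholder:

```python
from oslo_concurrency import processutils

# Mirror the logged command: qemu-img info with JSON output and a shared
# lock on the image, capped at 1 GiB address space and 30 s CPU time.
limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
out, _err = processutils.execute(
    'env', 'LC_ALL=C', 'LANG=C',
    'qemu-img', 'info', '/var/lib/nova/instances/<uuid>/disk',
    '--force-share', '--output=json',
    prlimit=limits)
print(out)
```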
Nov 26 23:30:36 compute-0 nova_compute[189387]: 2025-11-26 23:30:36.652 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:30:36 compute-0 nova_compute[189387]: 2025-11-26 23:30:36.654 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4778MB free_disk=72.33930206298828GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 23:30:36 compute-0 nova_compute[189387]: 2025-11-26 23:30:36.654 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:30:36 compute-0 nova_compute[189387]: 2025-11-26 23:30:36.655 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:30:36 compute-0 nova_compute[189387]: 2025-11-26 23:30:36.782 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 3214d9e6-3c61-49f0-a353-01201a6aa6db actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:30:36 compute-0 nova_compute[189387]: 2025-11-26 23:30:36.783 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:30:36 compute-0 nova_compute[189387]: 2025-11-26 23:30:36.783 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance f0ac9c29-04ba-4737-8af6-8fc91e451e8c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:30:36 compute-0 nova_compute[189387]: 2025-11-26 23:30:36.784 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:30:36 compute-0 nova_compute[189387]: 2025-11-26 23:30:36.784 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
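The final resource view is simple placement arithmetic over the three m1.small instances (flavor details appear in the discovery lines below), plus nova's reserved_host_memory_mb, assumed here at its 512 MB default:

```python
# Back-of-envelope check of the final resource view above; the 512 MB
# host reservation is an assumed default, not read from configuration.
instances = 3
ram_mb, disk_gb, eph_gb, vcpus = 512, 1, 1, 1   # m1.small, per discovery below
used_ram = instances * ram_mb + 512             # 2048 MB, as reported
used_disk = instances * (disk_gb + eph_gb)      # 6 GB, as reported
free_vcpus = 8 - instances * vcpus              # 5, matching the earlier view
print(used_ram, used_disk, free_vcpus)
```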
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.844 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; polling can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.844 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
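The manager is saying it has more pollsters than worker threads, so polls serialize behind a single-thread executor. A toy version of that dispatch pattern:

```python
from concurrent.futures import ThreadPoolExecutor

# Three pollsters queued behind one worker, so they run one after another.
pollsters = ['disk.ephemeral.size', 'network.incoming.packets', 'cpu']
with ThreadPoolExecutor(max_workers=1) as executor:  # [1] thread, as logged
    for name in pollsters:
        executor.submit(print, 'polling', name)
```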
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.844 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.845 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.846 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.853 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd', 'name': 'vn-fhdmirp-runjo4u2h7na-he3onrrerp7p-vnf-pxixoz6blnnj', 'flavor': {'id': 'abcd883d-a9af-4dee-93ae-b5623bc853b6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '422f324f-e13a-4c74-ba29-023e791ed636'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd2e793599b6418881c391df7f71e0c6', 'user_id': '6ad061874c77438db2e6d8efb2b1400b', 'hostId': '78fe62e880b703c207d346101c9f9f1436f7f233cb48d27a5485236f', 'status': 'active', 'metadata': {'metering.server_group': '6ec897c5-079b-468e-ab49-e7a7350f9bc9'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.857 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f0ac9c29-04ba-4737-8af6-8fc91e451e8c', 'name': 'vn-fhdmirp-gcwraztym6um-bi3jxhg2edck-vnf-4tssxs7u7dl3', 'flavor': {'id': 'abcd883d-a9af-4dee-93ae-b5623bc853b6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '422f324f-e13a-4c74-ba29-023e791ed636'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd2e793599b6418881c391df7f71e0c6', 'user_id': '6ad061874c77438db2e6d8efb2b1400b', 'hostId': '78fe62e880b703c207d346101c9f9f1436f7f233cb48d27a5485236f', 'status': 'active', 'metadata': {'metering.server_group': '6ec897c5-079b-468e-ab49-e7a7350f9bc9'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.861 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3214d9e6-3c61-49f0-a353-01201a6aa6db', 'name': 'test_0', 'flavor': {'id': 'abcd883d-a9af-4dee-93ae-b5623bc853b6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '422f324f-e13a-4c74-ba29-023e791ed636'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd2e793599b6418881c391df7f71e0c6', 'user_id': '6ad061874c77438db2e6d8efb2b1400b', 'hostId': '78fe62e880b703c207d346101c9f9f1436f7f233cb48d27a5485236f', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
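The three instance data records above are what the libvirt discovery step hands to every pollster this cycle: instance UUID, flavor geometry, image, tenant/user IDs, and any metering.* metadata. A minimal sketch of consuming such records, grouping instances by the metering.server_group key seen above (the dicts below are trimmed copies of the logged data; the grouping helper is hypothetical, not ceilometer code):

```python
# Hypothetical sketch: group discovered instances by their
# "metering.server_group" metadata key, mimicking the dicts logged
# by discover_libvirt_polling above (trimmed to the relevant fields).
from collections import defaultdict

instances = [
    {"id": "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd",
     "metadata": {"metering.server_group": "6ec897c5-079b-468e-ab49-e7a7350f9bc9"}},
    {"id": "f0ac9c29-04ba-4737-8af6-8fc91e451e8c",
     "metadata": {"metering.server_group": "6ec897c5-079b-468e-ab49-e7a7350f9bc9"}},
    {"id": "3214d9e6-3c61-49f0-a353-01201a6aa6db", "metadata": {}},  # test_0
]

groups = defaultdict(list)
for inst in instances:
    group = inst["metadata"].get("metering.server_group", "<ungrouped>")
    groups[group].append(inst["id"])

for group, ids in groups.items():
    print(group, "->", ids)
```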
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.861 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.861 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.861 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.862 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.862 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
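Every meter in this section follows the same pattern visible just above: a coordination check against the (empty) hashring, a heartbeat update, the poll itself, and a finished marker. A condensed, hypothetical sketch of that control flow (names are illustrative, not ceilometer's API):

```python
# Hypothetical condensed view of one polling cycle, modeled on the
# manager.py DEBUG lines above; names are illustrative only.
from datetime import datetime, timezone

def poll(name):
    # Stand-in for the real per-meter collection against libvirt.
    return [{"meter": name, "volume": 0}]

def run_pollster(name, needs_coordination=False, in_hashring=True):
    if needs_coordination and not in_hashring:
        return None  # another agent in the coordination group owns this work
    heartbeat = datetime.now(timezone.utc)  # "Pollster heartbeat update: <name>"
    samples = poll(name)                    # gather samples for each instance
    return heartbeat, samples

print(run_pollster("disk.ephemeral.size"))
```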
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.863 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.863 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.863 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.863 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.863 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T23:30:36.861970) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.864 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T23:30:36.863657) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.868 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.871 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.874 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.875 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.875 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.875 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.875 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.875 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.876 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.876 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.876 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.876 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.877 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.877 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T23:30:36.875982) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.877 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.877 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.877 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.878 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.878 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.878 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.878 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.878 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.879 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.879 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.879 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.880 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T23:30:36.877587) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.880 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T23:30:36.879202) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:30:36 compute-0 nova_compute[189387]: 2025-11-26 23:30:36.881 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:30:36 compute-0 nova_compute[189387]: 2025-11-26 23:30:36.896 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 23:30:36 compute-0 nova_compute[189387]: 2025-11-26 23:30:36.899 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:30:36 compute-0 nova_compute[189387]: 2025-11-26 23:30:36.900 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
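The nova_compute lines above report this host's placement inventory as unchanged. Placement derives schedulable capacity per resource class as (total - reserved) * allocation_ratio, so the logged figures work out as below (a quick check using the inventory dict exactly as logged, trimmed to the relevant keys):

```python
# Effective schedulable capacity per resource class, from the inventory
# dict logged by nova.scheduler.client.report above. Placement treats
# capacity as (total - reserved) * allocation_ratio.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g}")  # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 70.2
```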
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.912 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/cpu volume: 37840000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.940 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/cpu volume: 36560000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.966 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/cpu volume: 43050000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.967 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
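The cpu samples just logged are cumulative CPU time, which ceilometer reports in nanoseconds, so the three guests have consumed roughly 37 to 43 seconds of CPU since they started. A quick conversion:

```python
# The cpu meter is cumulative CPU time in nanoseconds; converting the
# three samples above to seconds of CPU consumed per instance.
samples_ns = {
    "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd": 37_840_000_000,
    "f0ac9c29-04ba-4737-8af6-8fc91e451e8c": 36_560_000_000,
    "3214d9e6-3c61-49f0-a353-01201a6aa6db": 43_050_000_000,
}
for uuid, ns in samples_ns.items():
    print(f"{uuid}: {ns / 1e9:.2f} s")  # 37.84 s, 36.56 s, 43.05 s
```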
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.967 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.967 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.967 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.968 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.968 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.968 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.968 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T23:30:36.967981) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.968 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.969 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.969 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.969 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.969 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.969 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.970 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.970 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T23:30:36.969996) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.994 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.995 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:36.995 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.028 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.029 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.029 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 nova_compute[189387]: 2025-11-26 23:30:37.057 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.061 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.062 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.062 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.062 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
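Each instance reports three block devices here: two at exactly 1 GiB, matching the m1.small flavor's 1 GB root and 1 GB ephemeral disks, plus a much smaller third device. The capacity figures are plain bytes:

```python
# disk.device.capacity is reported in bytes; 1073741824 is exactly 1 GiB.
def to_gib(n_bytes: int) -> float:
    return n_bytes / 2**30

for vol in (1073741824, 1073741824, 583680):
    print(f"{vol} B = {to_gib(vol):.6f} GiB")  # 1.000000, 1.000000, 0.000544
```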
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.062 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.062 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.063 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.063 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.063 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.063 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.063 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.063 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T23:30:37.063066) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.064 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.064 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.064 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.064 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.064 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.064 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.064 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.064 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T23:30:37.064495) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.065 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.065 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
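network.outgoing.bytes, polled earlier, is a cumulative counter (2398 bytes so far on each of the first two instances), while the .delta variant just finished reports only the change since the previous cycle. The relationship is plain subtraction; the earlier reading below is hypothetical, chosen to reproduce the logged delta of 70:

```python
# Delta meters are the difference of two successive cumulative readings.
previous_total = 2328  # hypothetical value from the prior polling cycle
current_total = 2398   # cumulative network.outgoing.bytes logged above
print(current_total - previous_total)  # 70, the logged .delta sample
```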
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.066 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.066 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.066 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.066 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.066 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.066 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.066 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.067 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
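memory.usage is expressed in MiB; against the m1.small flavor's 512 MiB of RAM, all three guests sit just under 10% utilization:

```python
# memory.usage (MiB) relative to the m1.small flavor's 512 MiB of RAM.
flavor_ram_mib = 512
for uuid, usage_mib in {
    "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd": 48.953125,
    "f0ac9c29-04ba-4737-8af6-8fc91e451e8c": 49.07421875,
    "3214d9e6-3c61-49f0-a353-01201a6aa6db": 48.76171875,
}.items():
    print(f"{uuid}: {usage_mib / flavor_ram_mib:.1%}")  # ~9.6% each
```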
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.067 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.067 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.067 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.067 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.067 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.068 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.incoming.bytes volume: 1612 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.068 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T23:30:37.066278) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.068 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.bytes volume: 2262 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.068 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T23:30:37.067635) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.068 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.068 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
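Unlike the meters above, network.outgoing.bytes.rate is skipped outright because its discovery pass found no new resources this cycle. When a rate meter does run, the value is in general a cumulative counter differenced over the polling interval; a sketch with purely illustrative numbers:

```python
# Hypothetical rate derivation: bytes per second from two timestamped
# cumulative readings (all numbers illustrative, not from the log).
t0, total0 = 0.0, 2328
t1, total1 = 10.0, 2398
print((total1 - total0) / (t1 - t0), "B/s")  # 7.0 B/s
```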
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.069 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.069 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.069 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.069 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.069 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.069 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.070 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T23:30:37.069563) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.070 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.070 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.070 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.070 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.070 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.070 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.071 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T23:30:37.070934) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.153 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.154 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.154 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.238 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.239 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.239 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.323 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.324 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.324 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.325 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.325 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.325 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.326 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.326 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.326 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.326 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.326 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.327 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.327 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.327 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.327 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.328 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.327 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T23:30:37.326149) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.328 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.328 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.328 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.329 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.329 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.329 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.329 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T23:30:37.327989) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.329 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.329 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.329 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.latency volume: 833217718 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.330 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.latency volume: 118947761 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.330 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.latency volume: 102487832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.330 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.latency volume: 1305394210 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.330 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.latency volume: 123508779 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.331 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.latency volume: 100732301 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.331 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.latency volume: 766490036 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.331 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.latency volume: 135917507 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.331 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T23:30:37.329767) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.332 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.latency volume: 99383059 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.332 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
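The disk.device.read.latency samples are cumulative time spent servicing reads. Assuming the usual nanosecond-granularity counter that libvirt exposes for block devices, the largest figure here is under a second of total read time:

```python
# Cumulative read latency, assuming nanosecond units as exposed by
# libvirt's block-device statistics.
for ns in (833217718, 118947761, 102487832):
    print(f"{ns} ns = {ns / 1e9:.3f} s")  # 0.833 s, 0.119 s, 0.102 s
```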
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.332 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.332 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.332 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.333 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.333 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.333 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.333 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.333 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.334 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.334 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.334 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.334 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T23:30:37.333177) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.334 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.335 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.335 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.336 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
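The disk.device.usage block above shows the fixed per-meter cycle the compute agent repeats for every meter in this section: run instance discovery, check whether the pollster belongs to a coordinated source (none here, so no hashring applies), record a heartbeat, emit one sample per instance and per disk device, then log completion. A minimal sketch of that control flow, with hypothetical names that only mirror the log lines (this is not the ceilometer implementation):

    import datetime

    def run_pollster(name, discover, get_samples, heartbeats):
        resources = discover()   # "Executing discovery process ..."
        if not resources:
            return               # "Skip pollster ..., no new resources found"
        # No coordination group is configured in this deployment, so the
        # hashring check is a no-op ("The current hashrings are ... [None]").
        heartbeats[name] = datetime.datetime.utcnow().isoformat()
        for resource in resources:
            for volume in get_samples(resource):
                print(f"{resource}/{name} volume: {volume}")
        print(f"Finished polling pollster {name}")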
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.336 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.336 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.336 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.336 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.336 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.337 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.337 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.337 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T23:30:37.336636) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.337 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.338 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.338 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.338 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.338 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.339 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.339 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.339 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.339 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.339 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.340 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.340 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.340 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.340 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.340 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.341 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.341 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.341 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.342 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.342 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.342 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T23:30:37.340065) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.343 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.343 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.343 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.343 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.343 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.343 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.344 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.344 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.344 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.344 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T23:30:37.343711) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.345 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.345 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.345 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.345 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.346 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.346 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.346 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.347 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.347 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.347 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.347 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.347 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.347 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.348 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T23:30:37.347219) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.348 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
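Each "Updated heartbeat for <meter>" line stores a completion timestamp keyed by meter name; comparing that timestamp against the polling interval is what lets the agent report pollster liveness. A toy tracker illustrating the idea (hypothetical, not ceilometer's _update_status):

    import datetime

    class HeartbeatTracker:
        def __init__(self, interval_s=300):
            self.interval_s = interval_s
            self.last_seen = {}

        def update(self, meter):
            # Called after a successful poll, e.g. update("power.state").
            self.last_seen[meter] = datetime.datetime.utcnow()

        def is_stale(self, meter):
            seen = self.last_seen.get(meter)
            if seen is None:
                return True
            age = (datetime.datetime.utcnow() - seen).total_seconds()
            return age > self.interval_s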
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.348 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.348 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.348 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.348 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.349 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.latency volume: 2706733169 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.349 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.latency volume: 13192002 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.349 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.349 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.latency volume: 2831606495 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.350 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.latency volume: 12954358 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.350 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.350 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T23:30:37.348897) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.350 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.latency volume: 2067067389 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.351 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.latency volume: 14796330 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.351 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.351 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.352 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.352 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.352 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.352 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.352 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.352 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.352 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.353 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.353 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.353 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.354 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.354 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.354 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.354 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.354 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.354 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T23:30:37.352587) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.354 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.355 14 DEBUG ceilometer.compute.pollsters [-] 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.355 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T23:30:37.354501) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.355 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.355 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.356 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.356 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.356 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.357 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.357 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.357 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.357 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
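Unlike the meters above, network.incoming.bytes.rate is skipped outright: when discovery returns nothing new for a pollster, the manager short-circuits instead of emitting empty samples. A sketch of that guard (hypothetical names, mirroring the branch logged at manager.py:321):

    def maybe_poll(name, discovered, poll_fn):
        # Short-circuit when discovery found no resources this cycle.
        if not discovered:
            print(f"Skip pollster {name}, no new resources found this cycle")
            return []
        return poll_fn(discovered)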
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.358 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.359 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.360 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.361 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.362 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.362 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.362 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.362 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.363 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.363 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.363 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.363 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.364 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.365 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:30:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:30:37.366 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:30:39 compute-0 nova_compute[189387]: 2025-11-26 23:30:39.963 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:30:39 compute-0 nova_compute[189387]: 2025-11-26 23:30:39.964 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:30:39 compute-0 nova_compute[189387]: 2025-11-26 23:30:39.965 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:30:40 compute-0 nova_compute[189387]: 2025-11-26 23:30:40.110 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:30:40 compute-0 podman[244760]: 2025-11-26 23:30:40.847157736 +0000 UTC m=+0.128735400 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
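The podman health_status entries are produced by podman's healthcheck timer: the configured test command (here /openstack/healthcheck compute, bind-mounted from /var/lib/openstack/healthchecks/ceilometer_agent_compute) runs inside the container and the verdict is logged together with the container's labels and config_data. The same check can be run on demand; a hedged example via the podman CLI, with the container name taken from the log line above:

    import subprocess

    # Trigger the container's configured healthcheck once, on demand.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_compute"],
        capture_output=True, text=True,
    )
    # Exit status 0 corresponds to health_status=healthy in the journal.
    print("healthy" if result.returncode == 0 else (result.stdout or result.stderr))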
Nov 26 23:30:41 compute-0 nova_compute[189387]: 2025-11-26 23:30:41.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:30:42 compute-0 nova_compute[189387]: 2025-11-26 23:30:42.059 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:30:45 compute-0 nova_compute[189387]: 2025-11-26 23:30:45.113 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:30:47 compute-0 nova_compute[189387]: 2025-11-26 23:30:47.061 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:30:48 compute-0 podman[244779]: 2025-11-26 23:30:48.822374665 +0000 UTC m=+0.120353216 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 26 23:30:50 compute-0 nova_compute[189387]: 2025-11-26 23:30:50.114 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:30:51 compute-0 podman[244806]: 2025-11-26 23:30:51.84557148 +0000 UTC m=+0.116035271 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, container_name=kepler, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, distribution-scope=public, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, version=9.4, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30)
Nov 26 23:30:51 compute-0 podman[244809]: 2025-11-26 23:30:51.858196257 +0000 UTC m=+0.113582515 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125)
Nov 26 23:30:51 compute-0 podman[244810]: 2025-11-26 23:30:51.861187407 +0000 UTC m=+0.115226539 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, config_id=edpm, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1755695350)
Nov 26 23:30:51 compute-0 podman[244808]: 2025-11-26 23:30:51.8609289 +0000 UTC m=+0.130693542 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 23:30:51 compute-0 podman[244807]: 2025-11-26 23:30:51.867115326 +0000 UTC m=+0.132402019 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
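
The three records above are podman healthcheck events: each run of a container's configured 'healthcheck' test emits one health_status line together with the current failing streak. A minimal sketch of reading the same status back out of podman (container name taken from the log; the podman CLI and its JSON inspect output are assumed available):

    import json
    import subprocess

    # Query podman for the container state behind the health_status=healthy
    # records; "openstack_network_exporter" is the container_name logged above.
    raw = subprocess.run(
        ["podman", "inspect", "openstack_network_exporter"],
        capture_output=True, text=True, check=True,
    ).stdout
    state = json.loads(raw)[0]["State"]
    # Newer podman reports State.Health, older releases State.Healthcheck.
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status"), health.get("FailingStreak"))
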
Nov 26 23:30:52 compute-0 nova_compute[189387]: 2025-11-26 23:30:52.064 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:30:55 compute-0 nova_compute[189387]: 2025-11-26 23:30:55.117 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:30:57 compute-0 nova_compute[189387]: 2025-11-26 23:30:57.067 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:30:59 compute-0 podman[203621]: time="2025-11-26T23:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:30:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:30:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4813 "" "Go-http-client/1.1"
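
The two GET lines are podman's API service (process podman[203621]) answering libpod REST calls over its unix socket, most plausibly from the podman exporter whose CONTAINER_HOST above points at unix:///run/podman/podman.sock. A minimal sketch (socket path assumed from that config) issuing the same containers/json request by hand:

    import socket

    SOCK = "/run/podman/podman.sock"  # assumed; matches CONTAINER_HOST above
    REQ = (b"GET /v4.9.3/libpod/containers/json?all=true HTTP/1.1\r\n"
           b"Host: localhost\r\nConnection: close\r\n\r\n")

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCK)
        s.sendall(REQ)
        resp = b"".join(iter(lambda: s.recv(65536), b""))

    # Print the status line and headers; the body is the JSON container list.
    print(resp.split(b"\r\n\r\n", 1)[0].decode(errors="replace"))
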
Nov 26 23:31:00 compute-0 nova_compute[189387]: 2025-11-26 23:31:00.120 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:01 compute-0 openstack_network_exporter[205787]: ERROR   23:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:31:01 compute-0 openstack_network_exporter[205787]: ERROR   23:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:31:01 compute-0 openstack_network_exporter[205787]: ERROR   23:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:31:01 compute-0 openstack_network_exporter[205787]: ERROR   23:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:31:01 compute-0 openstack_network_exporter[205787]: ERROR   23:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
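
These exporter errors recur every 30 seconds and are expected on a compute node: ovn-northd and the OVN ovsdb-server run only on controller nodes, so their control sockets never appear here, and the dpif-netdev/pmd-* appctl calls apply only to the userspace (netdev) datapath while this host uses the kernel datapath. A small sketch of the control-socket lookup that appctl.go appears to perform (the NAME.PID.ctl rundir layout is an assumption about the exporter's logic, though it matches the OVS/OVN convention):

    import glob
    import os

    # An OVS/OVN daemon NAME writes NAME.PID.ctl under its rundir; the PID is
    # recovered from the filename. On compute-0 ovn-northd never runs, so the
    # glob is empty and the exporter logs "no control socket files found".
    def find_pid(rundir, name):
        ctls = glob.glob(os.path.join(rundir, f"{name}.*.ctl"))
        if not ctls:
            return None  # -> "no control socket files found"
        base = os.path.basename(ctls[0])      # e.g. ovn-northd.1234.ctl
        return int(base[len(name) + 1:-4])

    print(find_pid("/run/ovn", "ovn-northd"))  # None on a compute-only host
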
Nov 26 23:31:01 compute-0 podman[244898]: 2025-11-26 23:31:01.841146022 +0000 UTC m=+0.124885898 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Nov 26 23:31:02 compute-0 nova_compute[189387]: 2025-11-26 23:31:02.070 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:05 compute-0 nova_compute[189387]: 2025-11-26 23:31:05.122 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:05 compute-0 podman[244919]: 2025-11-26 23:31:05.830625043 +0000 UTC m=+0.099629754 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 23:31:07 compute-0 nova_compute[189387]: 2025-11-26 23:31:07.075 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:31:09.636 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:31:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:31:09.637 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:31:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:31:09.638 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
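
The Acquiring/acquired/released triple above is oslo.concurrency's standard trace for a named in-process lock serializing ProcessMonitor._check_child_processes. The same pattern, as a minimal sketch (the function body is hypothetical):

    from oslo_concurrency import lockutils

    # lockutils.synchronized("name") wraps the call in a per-name semaphore
    # and emits exactly the Acquiring/acquired/released DEBUG lines seen above.
    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        pass  # inspect monitored child processes while holding the lock

    _check_child_processes()
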
Nov 26 23:31:10 compute-0 nova_compute[189387]: 2025-11-26 23:31:10.125 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:11 compute-0 podman[244943]: 2025-11-26 23:31:11.832712183 +0000 UTC m=+0.111565661 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 26 23:31:12 compute-0 nova_compute[189387]: 2025-11-26 23:31:12.078 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:15 compute-0 nova_compute[189387]: 2025-11-26 23:31:15.128 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:17 compute-0 nova_compute[189387]: 2025-11-26 23:31:17.081 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:19 compute-0 podman[244963]: 2025-11-26 23:31:19.940398781 +0000 UTC m=+0.214200274 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 23:31:20 compute-0 nova_compute[189387]: 2025-11-26 23:31:20.131 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:22 compute-0 nova_compute[189387]: 2025-11-26 23:31:22.084 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:22 compute-0 podman[244990]: 2025-11-26 23:31:22.831253338 +0000 UTC m=+0.097577459 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:31:22 compute-0 podman[244992]: 2025-11-26 23:31:22.856896973 +0000 UTC m=+0.120155012 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 23:31:22 compute-0 podman[244998]: 2025-11-26 23:31:22.862487352 +0000 UTC m=+0.097638750 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_id=edpm, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, distribution-scope=public)
Nov 26 23:31:22 compute-0 podman[244991]: 2025-11-26 23:31:22.872354896 +0000 UTC m=+0.129728348 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 26 23:31:22 compute-0 podman[244989]: 2025-11-26 23:31:22.886956485 +0000 UTC m=+0.160804996 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, distribution-scope=public, com.redhat.component=ubi9-container, release=1214.1726694543, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-type=git, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, architecture=x86_64, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4)
Nov 26 23:31:25 compute-0 nova_compute[189387]: 2025-11-26 23:31:25.134 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:27 compute-0 nova_compute[189387]: 2025-11-26 23:31:27.088 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:29 compute-0 podman[203621]: time="2025-11-26T23:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:31:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:31:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4801 "" "Go-http-client/1.1"
Nov 26 23:31:30 compute-0 nova_compute[189387]: 2025-11-26 23:31:30.137 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:31 compute-0 openstack_network_exporter[205787]: ERROR   23:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:31:31 compute-0 openstack_network_exporter[205787]: ERROR   23:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:31:31 compute-0 openstack_network_exporter[205787]: ERROR   23:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:31:31 compute-0 openstack_network_exporter[205787]: ERROR   23:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:31:31 compute-0 openstack_network_exporter[205787]: ERROR   23:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:31:32 compute-0 nova_compute[189387]: 2025-11-26 23:31:32.090 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:32 compute-0 nova_compute[189387]: 2025-11-26 23:31:32.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:31:32 compute-0 nova_compute[189387]: 2025-11-26 23:31:32.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 23:31:32 compute-0 nova_compute[189387]: 2025-11-26 23:31:32.428 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-f0ac9c29-04ba-4737-8af6-8fc91e451e8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:31:32 compute-0 nova_compute[189387]: 2025-11-26 23:31:32.429 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-f0ac9c29-04ba-4737-8af6-8fc91e451e8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:31:32 compute-0 nova_compute[189387]: 2025-11-26 23:31:32.430 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 26 23:31:32 compute-0 podman[245082]: 2025-11-26 23:31:32.830448017 +0000 UTC m=+0.109062135 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 26 23:31:33 compute-0 nova_compute[189387]: 2025-11-26 23:31:33.256 189391 DEBUG nova.compute.manager [req-e9413d2e-21e8-447f-8537-2236ed718872 req-aca846b7-eb5f-4928-bcbf-553a1272f0b4 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Received event network-changed-c5ede21d-87b7-4215-9363-b5863725bc1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:31:33 compute-0 nova_compute[189387]: 2025-11-26 23:31:33.257 189391 DEBUG nova.compute.manager [req-e9413d2e-21e8-447f-8537-2236ed718872 req-aca846b7-eb5f-4928-bcbf-553a1272f0b4 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Refreshing instance network info cache due to event network-changed-c5ede21d-87b7-4215-9363-b5863725bc1e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 23:31:33 compute-0 nova_compute[189387]: 2025-11-26 23:31:33.258 189391 DEBUG oslo_concurrency.lockutils [req-e9413d2e-21e8-447f-8537-2236ed718872 req-aca846b7-eb5f-4928-bcbf-553a1272f0b4 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "refresh_cache-2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:31:33 compute-0 nova_compute[189387]: 2025-11-26 23:31:33.259 189391 DEBUG oslo_concurrency.lockutils [req-e9413d2e-21e8-447f-8537-2236ed718872 req-aca846b7-eb5f-4928-bcbf-553a1272f0b4 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquired lock "refresh_cache-2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:31:33 compute-0 nova_compute[189387]: 2025-11-26 23:31:33.259 189391 DEBUG nova.network.neutron [req-e9413d2e-21e8-447f-8537-2236ed718872 req-aca846b7-eb5f-4928-bcbf-553a1272f0b4 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Refreshing network info cache for port c5ede21d-87b7-4215-9363-b5863725bc1e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 23:31:34 compute-0 nova_compute[189387]: 2025-11-26 23:31:34.978 189391 DEBUG oslo_concurrency.lockutils [None req-1a255358-271b-441b-b6cc-e5347f3b283c 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:31:34 compute-0 nova_compute[189387]: 2025-11-26 23:31:34.979 189391 DEBUG oslo_concurrency.lockutils [None req-1a255358-271b-441b-b6cc-e5347f3b283c 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:31:34 compute-0 nova_compute[189387]: 2025-11-26 23:31:34.979 189391 DEBUG oslo_concurrency.lockutils [None req-1a255358-271b-441b-b6cc-e5347f3b283c 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:31:34 compute-0 nova_compute[189387]: 2025-11-26 23:31:34.979 189391 DEBUG oslo_concurrency.lockutils [None req-1a255358-271b-441b-b6cc-e5347f3b283c 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:31:34 compute-0 nova_compute[189387]: 2025-11-26 23:31:34.980 189391 DEBUG oslo_concurrency.lockutils [None req-1a255358-271b-441b-b6cc-e5347f3b283c 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:31:34 compute-0 nova_compute[189387]: 2025-11-26 23:31:34.982 189391 INFO nova.compute.manager [None req-1a255358-271b-441b-b6cc-e5347f3b283c 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Terminating instance#033[00m
Nov 26 23:31:34 compute-0 nova_compute[189387]: 2025-11-26 23:31:34.984 189391 DEBUG nova.compute.manager [None req-1a255358-271b-441b-b6cc-e5347f3b283c 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 26 23:31:35 compute-0 kernel: tapc5ede21d-87 (unregistering): left promiscuous mode
Nov 26 23:31:35 compute-0 NetworkManager[56227]: <info>  [1764199895.0432] device (tapc5ede21d-87): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 23:31:35 compute-0 ovn_controller[97697]: 2025-11-26T23:31:35Z|00054|binding|INFO|Releasing lport c5ede21d-87b7-4215-9363-b5863725bc1e from this chassis (sb_readonly=0)
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.057 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:35 compute-0 ovn_controller[97697]: 2025-11-26T23:31:35Z|00055|binding|INFO|Setting lport c5ede21d-87b7-4215-9363-b5863725bc1e down in Southbound
Nov 26 23:31:35 compute-0 ovn_controller[97697]: 2025-11-26T23:31:35Z|00056|binding|INFO|Removing iface tapc5ede21d-87 ovn-installed in OVS
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.063 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:35 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:31:35.073 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d8:b5:86 192.168.0.214'], port_security=['fa:16:3e:d8:b5:86 192.168.0.214'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-nvijrfhdmirp-runjo4u2h7na-he3onrrerp7p-port-ah5ptqkcbqsc', 'neutron:cidrs': '192.168.0.214/24', 'neutron:device_id': '2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-nvijrfhdmirp-runjo4u2h7na-he3onrrerp7p-port-ah5ptqkcbqsc', 'neutron:project_id': 'dd2e793599b6418881c391df7f71e0c6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f63b4453-d311-40b9-8478-8f99967e0625', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef9a1501-6a1b-48e2-a80c-71a5e303b45d, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=c5ede21d-87b7-4215-9363-b5863725bc1e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:31:35 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:31:35.076 106595 INFO neutron.agent.ovn.metadata.agent [-] Port c5ede21d-87b7-4215-9363-b5863725bc1e in datapath 16c31f2c-5dd2-49b9-b313-1ecd3b059554 unbound from our chassis#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.076 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:35 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:31:35.080 106595 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 16c31f2c-5dd2-49b9-b313-1ecd3b059554#033[00m
Nov 26 23:31:35 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:31:35.101 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[fd010283-ed72-440c-bd1f-595b26e6dc64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:31:35 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Nov 26 23:31:35 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 1min 42.069s CPU time.
Nov 26 23:31:35 compute-0 systemd-machined[155674]: Machine qemu-3-instance-00000003 terminated.
Nov 26 23:31:35 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:31:35.140 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[6b69166e-9840-4ca3-a069-49d9a0085136]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.142 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:35 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:31:35.145 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[17f0dd21-ff00-44e1-9ea2-0669ae599e75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:31:35 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:31:35.150 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:74:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:17:d1:48:8c:c3'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.150 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:35 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:31:35.185 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[8df5dc89-e829-4405-8e84-67016330fa8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:31:35 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:31:35.206 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[8a8a04eb-55e4-43f5-b52b-2d1246ef2e1c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16c31f2c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:bc:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 13, 'rx_bytes': 532, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 13, 'rx_bytes': 532, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383451, 'reachable_time': 27671, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245115, 'error': None, 'target': 'ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.222 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:35 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:31:35.230 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[8c5db37e-dc8f-42ee-8040-9b0cd2fe1418]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap16c31f2c-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383460, 'tstamp': 383460}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245118, 'error': None, 'target': 'ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap16c31f2c-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383463, 'tstamp': 383463}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245118, 'error': None, 'target': 'ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:31:35 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:31:35.233 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16c31f2c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.234 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.237 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.246 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:35 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:31:35.246 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16c31f2c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:31:35 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:31:35.247 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:31:35 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:31:35.247 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap16c31f2c-50, col_values=(('external_ids', {'iface-id': 'fcca7a28-5262-4637-8ef9-d543dee768b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:31:35 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:31:35.248 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
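
The txn lines show the metadata agent replaying its idempotent plumbing for the metadata tap through ovsdbapp commands; the second and third commits report "Transaction caused no change" because the port already sits on br-int with the right external_ids. A rough equivalent using ovsdbapp's public Open_vSwitch API, as a sketch (local db socket path assumed; the agent itself reuses a long-lived IDL connection):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed local ovsdb-server socket on the compute node.
    conn = connection.Connection(
        idl=connection.OvsdbIdl.from_server(
            "unix:/run/openvswitch/db.sock", "Open_vSwitch"),
        timeout=5)
    api = impl_idl.OvsdbIdl(conn)

    with api.transaction(check_error=True) as txn:
        # Same three commands the agent logs: drop the tap from br-ex if it
        # is there, ensure it is on br-int, and (re)set its iface-id.
        txn.add(api.del_port("tap16c31f2c-50", bridge="br-ex", if_exists=True))
        txn.add(api.add_port("br-int", "tap16c31f2c-50", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap16c31f2c-50",
            ("external_ids",
             {"iface-id": "fcca7a28-5262-4637-8ef9-d543dee768b2"})))
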
Nov 26 23:31:35 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:31:35.250 106595 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.258 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Updating instance_info_cache with network_info: [{"id": "31b6bc9a-cd65-44ef-96ea-c84d392117c8", "address": "fa:16:3e:22:3f:da", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.69", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31b6bc9a-cd", "ovs_interfaceid": "31b6bc9a-cd65-44ef-96ea-c84d392117c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.286 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-f0ac9c29-04ba-4737-8af6-8fc91e451e8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.286 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.287 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.287 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.287 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.288 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.288 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
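
Each "Running periodic task" line is oslo.service's PeriodicTasks dispatcher iterating ComputeManager's decorated methods in turn; _reclaim_queued_deletes returns immediately because reclaim_instance_interval <= 0. The shape of that machinery, as a minimal sketch (class and task names illustrative, not nova's actual code):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        # run_immediately=True makes the first dispatch execute the task;
        # the dispatcher logs "Running periodic task ..." before each call.
        @periodic_task.periodic_task(run_immediately=True)
        def _poll_volume_usage(self, context):
            pass  # nova skips the body when its config interval is <= 0

    Manager().run_periodic_tasks(context=None)
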
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.307 189391 INFO nova.virt.libvirt.driver [-] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Instance destroyed successfully.#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.307 189391 DEBUG nova.objects.instance [None req-1a255358-271b-441b-b6cc-e5347f3b283c 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lazy-loading 'resources' on Instance uuid 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.326 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.326 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.326 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
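The three lockutils lines above are the standard oslo.concurrency pattern: the library logs when a lock is requested, how long the caller waited, and how long it was held. A minimal sketch of the same pattern (names illustrative, not nova's actual code):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Runs with the in-process "compute_resources" lock held;
        # lockutils emits the acquired/waited/held DEBUG lines seen above.
        pass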
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.327 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.328 189391 DEBUG nova.virt.libvirt.vif [None req-1a255358-271b-441b-b6cc-e5347f3b283c 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T23:23:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-fhdmirp-runjo4u2h7na-he3onrrerp7p-vnf-pxixoz6blnnj',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-fhdmirp-runjo4u2h7na-he3onrrerp7p-vnf-pxixoz6blnnj',id=3,image_ref='422f324f-e13a-4c74-ba29-023e791ed636',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-26T23:23:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='6ec897c5-079b-468e-ab49-e7a7350f9bc9'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dd2e793599b6418881c391df7f71e0c6',ramdisk_id='',reservation_id='r-1qhi57sg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='422f324f-e13a-4c74-ba29-023e791ed636',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T23:23:48Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT01MzkyODE0MzQ0NjY2MzcxMjg4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTUzOTI4MTQzNDQ2NjYzNzEyODg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NTM5MjgxNDM0NDY2NjM3MTI4OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTUzOTI4MTQzNDQ2NjYzNzEyODg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT01MzkyODE0MzQ0NjY2MzcxMjg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT01MzkyODE0MzQ0NjY2MzcxMjg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Nov 26 23:31:35 compute-0 nova_compute[189387]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NTM5MjgxNDM0NDY2NjM3MTI4OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTUzOTI4MTQzNDQ2NjYzNzEyODg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT01MzkyODE0MzQ0NjY2MzcxMjg4PT0tLQo=',user_id='6ad061874c77438db2e6d8efb2b1400b',uuid=2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c5ede21d-87b7-4215-9363-b5863725bc1e", "address": "fa:16:3e:d8:b5:86", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5ede21d-87", "ovs_interfaceid": "c5ede21d-87b7-4215-9363-b5863725bc1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.329 189391 DEBUG nova.network.os_vif_util [None req-1a255358-271b-441b-b6cc-e5347f3b283c 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Converting VIF {"id": "c5ede21d-87b7-4215-9363-b5863725bc1e", "address": "fa:16:3e:d8:b5:86", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5ede21d-87", "ovs_interfaceid": "c5ede21d-87b7-4215-9363-b5863725bc1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.330 189391 DEBUG nova.network.os_vif_util [None req-1a255358-271b-441b-b6cc-e5347f3b283c 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d8:b5:86,bridge_name='br-int',has_traffic_filtering=True,id=c5ede21d-87b7-4215-9363-b5863725bc1e,network=Network(16c31f2c-5dd2-49b9-b313-1ecd3b059554),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc5ede21d-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.330 189391 DEBUG os_vif [None req-1a255358-271b-441b-b6cc-e5347f3b283c 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:b5:86,bridge_name='br-int',has_traffic_filtering=True,id=c5ede21d-87b7-4215-9363-b5863725bc1e,network=Network(16c31f2c-5dd2-49b9-b313-1ecd3b059554),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc5ede21d-87') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.332 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.332 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5ede21d-87, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
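DelPortCommand is ovsdbapp's OVSDB transaction that removes the instance's tap interface from br-int, equivalent to ovs-vsctl --if-exists del-port br-int tapc5ede21d-87. A sketch of issuing the same command through ovsdbapp's public API (the socket path is an assumption; os-vif talks to the local ovsdb-server):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    api.del_port('tapc5ede21d-87', bridge='br-int',
                 if_exists=True).execute(check_error=True)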
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.335 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.336 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.337 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.343 189391 DEBUG nova.compute.manager [req-1fee7280-32c0-45f1-9970-4aa103843259 req-3a16be7e-bab0-472a-952c-cf424202c4e0 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Received event network-vif-unplugged-c5ede21d-87b7-4215-9363-b5863725bc1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.344 189391 DEBUG oslo_concurrency.lockutils [req-1fee7280-32c0-45f1-9970-4aa103843259 req-3a16be7e-bab0-472a-952c-cf424202c4e0 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.344 189391 DEBUG oslo_concurrency.lockutils [req-1fee7280-32c0-45f1-9970-4aa103843259 req-3a16be7e-bab0-472a-952c-cf424202c4e0 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.344 189391 DEBUG oslo_concurrency.lockutils [req-1fee7280-32c0-45f1-9970-4aa103843259 req-3a16be7e-bab0-472a-952c-cf424202c4e0 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.344 189391 DEBUG nova.compute.manager [req-1fee7280-32c0-45f1-9970-4aa103843259 req-3a16be7e-bab0-472a-952c-cf424202c4e0 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] No waiting events found dispatching network-vif-unplugged-c5ede21d-87b7-4215-9363-b5863725bc1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.344 189391 DEBUG nova.compute.manager [req-1fee7280-32c0-45f1-9970-4aa103843259 req-3a16be7e-bab0-472a-952c-cf424202c4e0 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Received event network-vif-unplugged-c5ede21d-87b7-4215-9363-b5863725bc1e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
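"No waiting events found" means no in-flight operation had registered a waiter for network-vif-unplugged on this instance, so the event unblocks nothing; because the instance's task_state is deleting, the notification is simply recorded and dropped.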
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.345 189391 INFO os_vif [None req-1a255358-271b-441b-b6cc-e5347f3b283c 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d8:b5:86,bridge_name='br-int',has_traffic_filtering=True,id=c5ede21d-87b7-4215-9363-b5863725bc1e,network=Network(16c31f2c-5dd2-49b9-b313-1ecd3b059554),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapc5ede21d-87')#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.345 189391 INFO nova.virt.libvirt.driver [None req-1a255358-271b-441b-b6cc-e5347f3b283c 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Deleting instance files /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd_del#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.346 189391 INFO nova.virt.libvirt.driver [None req-1a255358-271b-441b-b6cc-e5347f3b283c 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Deletion of /var/lib/nova/instances/2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd_del complete#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.417 189391 INFO nova.compute.manager [None req-1a255358-271b-441b-b6cc-e5347f3b283c 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Took 0.43 seconds to destroy the instance on the hypervisor.#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.418 189391 DEBUG oslo.service.loopingcall [None req-1a255358-271b-441b-b6cc-e5347f3b283c 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.418 189391 DEBUG nova.compute.manager [-] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.418 189391 DEBUG nova.network.neutron [-] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.432 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Error from libvirt while getting description of instance-00000003: [Error Code 42] Domain not found: no domain with matching uuid '2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd' (instance-00000003): libvirt.libvirtError: Domain not found: no domain with matching uuid '2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd' (instance-00000003)#033[00m
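This WARNING is a benign race: the update_available_resource periodic task (req-676ef9c8) enumerated libvirt domains while req-1a255358 was destroying instance-00000003. Error code 42 is VIR_ERR_NO_DOMAIN, which callers normally treat as "already gone"; a minimal sketch of that handling with libvirt-python:

    import libvirt

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.lookupByUUIDString('2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd')
    except libvirt.libvirtError as exc:
        if exc.get_error_code() == libvirt.VIR_ERR_NO_DOMAIN:  # code 42
            dom = None  # domain already undefined; skip it, as nova does here
        else:
            raise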
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.440 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.506 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.507 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
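Each qemu-img probe is wrapped in oslo_concurrency.prlimit, which re-execs the command under RLIMIT_AS (--as=1073741824, i.e. 1 GiB) and RLIMIT_CPU (--cpu=30 seconds) so that inspecting a corrupt or malicious image cannot exhaust the host, while --force-share lets qemu-img read a disk image on which the running QEMU holds a write lock. A sketch of the call that produces this command line (path taken from the log; the exact nova call site differs):

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30)
    out, _err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk',
        '--force-share', '--output=json',
        prlimit=limits)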
Nov 26 23:31:35 compute-0 rsyslogd[236865]: message too long (8192) with configured size 8096, begin of message is: 2025-11-26 23:31:35.328 189391 DEBUG nova.virt.libvirt.vif [None req-1a255358-27 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
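This rsyslogd complaint explains the unprefixed base64 fragments earlier in the capture: the nova.virt.libvirt.vif DEBUG line with the embedded user_data blob exceeded the 8096-byte limit (rsyslog's default) and was truncated and split. If full lines are needed, raise the limit near the top of /etc/rsyslog.conf, before any input modules load, with global(maxMessageSize="64k") or the legacy $MaxMessageSize 64k directive.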
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.577 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.578 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.653 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.654 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.712 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.727 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.791 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.794 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.893 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.894 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.981 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:31:35 compute-0 nova_compute[189387]: 2025-11-26 23:31:35.984 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:31:36 compute-0 nova_compute[189387]: 2025-11-26 23:31:36.048 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:31:36 compute-0 nova_compute[189387]: 2025-11-26 23:31:36.535 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 23:31:36 compute-0 nova_compute[189387]: 2025-11-26 23:31:36.538 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4949MB free_disk=72.34093856811523GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 23:31:36 compute-0 nova_compute[189387]: 2025-11-26 23:31:36.538 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:31:36 compute-0 nova_compute[189387]: 2025-11-26 23:31:36.539 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:31:36 compute-0 nova_compute[189387]: 2025-11-26 23:31:36.647 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 3214d9e6-3c61-49f0-a353-01201a6aa6db actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 23:31:36 compute-0 nova_compute[189387]: 2025-11-26 23:31:36.647 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 23:31:36 compute-0 nova_compute[189387]: 2025-11-26 23:31:36.648 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance f0ac9c29-04ba-4737-8af6-8fc91e451e8c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 23:31:36 compute-0 nova_compute[189387]: 2025-11-26 23:31:36.648 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 23:31:36 compute-0 nova_compute[189387]: 2025-11-26 23:31:36.648 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 23:31:36 compute-0 nova_compute[189387]: 2025-11-26 23:31:36.716 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:31:36 compute-0 nova_compute[189387]: 2025-11-26 23:31:36.726 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
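Placement derives capacity per resource class as (total - reserved) * allocation_ratio, so this inventory allows 32 VCPU, 7168 MB of RAM and 70.2 GB of disk against the 3 vCPUs, 1536 MB and 6 GB consumed by the three instances counted in the final resource view above. A quick check of those numbers:

    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 79, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2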
Nov 26 23:31:36 compute-0 nova_compute[189387]: 2025-11-26 23:31:36.744 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 23:31:36 compute-0 nova_compute[189387]: 2025-11-26 23:31:36.744 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:31:36 compute-0 podman[245163]: 2025-11-26 23:31:36.854554962 +0000 UTC m=+0.127065595 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
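The podman health_status records in this capture come from the per-container healthcheck timers: podman periodically runs the configured test command (here /openstack/healthcheck podman_exporter) inside the container and logs the verdict together with the consecutive-failure streak. The same check can be triggered by hand with podman healthcheck run podman_exporter.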
Nov 26 23:31:37 compute-0 nova_compute[189387]: 2025-11-26 23:31:37.445 189391 DEBUG nova.compute.manager [req-e0eafa06-7acc-437e-a8fa-ce42f8d1bc27 req-41f6997a-0f08-415f-be63-b89e66ff41fe f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Received event network-vif-plugged-c5ede21d-87b7-4215-9363-b5863725bc1e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:31:37 compute-0 nova_compute[189387]: 2025-11-26 23:31:37.445 189391 DEBUG oslo_concurrency.lockutils [req-e0eafa06-7acc-437e-a8fa-ce42f8d1bc27 req-41f6997a-0f08-415f-be63-b89e66ff41fe f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:31:37 compute-0 nova_compute[189387]: 2025-11-26 23:31:37.446 189391 DEBUG oslo_concurrency.lockutils [req-e0eafa06-7acc-437e-a8fa-ce42f8d1bc27 req-41f6997a-0f08-415f-be63-b89e66ff41fe f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:31:37 compute-0 nova_compute[189387]: 2025-11-26 23:31:37.447 189391 DEBUG oslo_concurrency.lockutils [req-e0eafa06-7acc-437e-a8fa-ce42f8d1bc27 req-41f6997a-0f08-415f-be63-b89e66ff41fe f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:31:37 compute-0 nova_compute[189387]: 2025-11-26 23:31:37.448 189391 DEBUG nova.compute.manager [req-e0eafa06-7acc-437e-a8fa-ce42f8d1bc27 req-41f6997a-0f08-415f-be63-b89e66ff41fe f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] No waiting events found dispatching network-vif-plugged-c5ede21d-87b7-4215-9363-b5863725bc1e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:31:37 compute-0 nova_compute[189387]: 2025-11-26 23:31:37.448 189391 WARNING nova.compute.manager [req-e0eafa06-7acc-437e-a8fa-ce42f8d1bc27 req-41f6997a-0f08-415f-be63-b89e66ff41fe f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Received unexpected event network-vif-plugged-c5ede21d-87b7-4215-9363-b5863725bc1e for instance with vm_state active and task_state deleting.#033[00m
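The unexpected network-vif-plugged event two seconds after the corresponding network-vif-unplugged is most likely a stale port-status notification from neutron/OVN racing with the teardown; since the instance is still in task_state deleting and nothing is waiting on the event, nova only logs the warning and discards it.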
Nov 26 23:31:37 compute-0 nova_compute[189387]: 2025-11-26 23:31:37.491 189391 DEBUG nova.network.neutron [req-e9413d2e-21e8-447f-8537-2236ed718872 req-aca846b7-eb5f-4928-bcbf-553a1272f0b4 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Updated VIF entry in instance network info cache for port c5ede21d-87b7-4215-9363-b5863725bc1e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 23:31:37 compute-0 nova_compute[189387]: 2025-11-26 23:31:37.491 189391 DEBUG nova.network.neutron [req-e9413d2e-21e8-447f-8537-2236ed718872 req-aca846b7-eb5f-4928-bcbf-553a1272f0b4 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Updating instance_info_cache with network_info: [{"id": "c5ede21d-87b7-4215-9363-b5863725bc1e", "address": "fa:16:3e:d8:b5:86", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.214", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc5ede21d-87", "ovs_interfaceid": "c5ede21d-87b7-4215-9363-b5863725bc1e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:31:37 compute-0 nova_compute[189387]: 2025-11-26 23:31:37.515 189391 DEBUG oslo_concurrency.lockutils [req-e9413d2e-21e8-447f-8537-2236ed718872 req-aca846b7-eb5f-4928-bcbf-553a1272f0b4 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Releasing lock "refresh_cache-2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:31:39 compute-0 nova_compute[189387]: 2025-11-26 23:31:39.087 189391 DEBUG nova.network.neutron [-] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:31:39 compute-0 nova_compute[189387]: 2025-11-26 23:31:39.107 189391 INFO nova.compute.manager [-] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Took 3.69 seconds to deallocate network for instance.#033[00m
Nov 26 23:31:39 compute-0 nova_compute[189387]: 2025-11-26 23:31:39.183 189391 DEBUG oslo_concurrency.lockutils [None req-1a255358-271b-441b-b6cc-e5347f3b283c 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:31:39 compute-0 nova_compute[189387]: 2025-11-26 23:31:39.184 189391 DEBUG oslo_concurrency.lockutils [None req-1a255358-271b-441b-b6cc-e5347f3b283c 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:31:39 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:31:39.252 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbd59242-3683-4df7-8a2a-12b2eb702783, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:31:39 compute-0 nova_compute[189387]: 2025-11-26 23:31:39.400 189391 DEBUG nova.compute.provider_tree [None req-1a255358-271b-441b-b6cc-e5347f3b283c 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:31:39 compute-0 nova_compute[189387]: 2025-11-26 23:31:39.413 189391 DEBUG nova.scheduler.client.report [None req-1a255358-271b-441b-b6cc-e5347f3b283c 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 23:31:39 compute-0 nova_compute[189387]: 2025-11-26 23:31:39.430 189391 DEBUG oslo_concurrency.lockutils [None req-1a255358-271b-441b-b6cc-e5347f3b283c 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:31:39 compute-0 nova_compute[189387]: 2025-11-26 23:31:39.460 189391 INFO nova.scheduler.client.report [None req-1a255358-271b-441b-b6cc-e5347f3b283c 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Deleted allocations for instance 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd#033[00m
Nov 26 23:31:39 compute-0 nova_compute[189387]: 2025-11-26 23:31:39.526 189391 DEBUG oslo_concurrency.lockutils [None req-1a255358-271b-441b-b6cc-e5347f3b283c 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:31:39 compute-0 nova_compute[189387]: 2025-11-26 23:31:39.580 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:31:39 compute-0 nova_compute[189387]: 2025-11-26 23:31:39.581 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:31:40 compute-0 nova_compute[189387]: 2025-11-26 23:31:40.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:31:40 compute-0 nova_compute[189387]: 2025-11-26 23:31:40.145 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:40 compute-0 nova_compute[189387]: 2025-11-26 23:31:40.335 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:41 compute-0 nova_compute[189387]: 2025-11-26 23:31:41.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:31:42 compute-0 podman[245185]: 2025-11-26 23:31:42.82460594 +0000 UTC m=+0.111479878 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 23:31:45 compute-0 nova_compute[189387]: 2025-11-26 23:31:45.148 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:45 compute-0 nova_compute[189387]: 2025-11-26 23:31:45.337 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:47 compute-0 nova_compute[189387]: 2025-11-26 23:31:47.121 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:31:50 compute-0 nova_compute[189387]: 2025-11-26 23:31:50.151 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:50 compute-0 nova_compute[189387]: 2025-11-26 23:31:50.302 189391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764199895.3011982, 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:31:50 compute-0 nova_compute[189387]: 2025-11-26 23:31:50.304 189391 INFO nova.compute.manager [-] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] VM Stopped (Lifecycle Event)#033[00m
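Lifecycle events arrive asynchronously from the libvirt event loop, and nova's libvirt driver deliberately defers STOPPED events (roughly 15 seconds, which matches the gap between the 23:31:35 destroy and this 23:31:50 emission) so that a fast reboot is not misread as a shutdown. The follow-up power-state check below is a no-op for an instance that has already been deleted.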
Nov 26 23:31:50 compute-0 nova_compute[189387]: 2025-11-26 23:31:50.332 189391 DEBUG nova.compute.manager [None req-54b58366-2064-4c88-a224-5fd617d88604 - - - - - -] [instance: 2a76fe3c-24f1-42c6-bc97-0dbce5ee4bcd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:31:50 compute-0 nova_compute[189387]: 2025-11-26 23:31:50.340 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:31:50 compute-0 podman[245206]: 2025-11-26 23:31:50.827981942 +0000 UTC m=+0.129940723 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:31:53 compute-0 podman[245232]: 2025-11-26 23:31:53.821498792 +0000 UTC m=+0.114450028 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, distribution-scope=public)
Nov 26 23:31:53 compute-0 podman[245239]: 2025-11-26 23:31:53.827964475 +0000 UTC m=+0.097961248 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, vcs-type=git, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, name=ubi9-minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, version=9.6, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Nov 26 23:31:53 compute-0 podman[245235]: 2025-11-26 23:31:53.842134903 +0000 UTC m=+0.114732196 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi)
Nov 26 23:31:53 compute-0 podman[245233]: 2025-11-26 23:31:53.843005877 +0000 UTC m=+0.131823684 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 23:31:53 compute-0 podman[245234]: 2025-11-26 23:31:53.857470333 +0000 UTC m=+0.135314036 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:31:55 compute-0 nova_compute[189387]: 2025-11-26 23:31:55.153 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:31:55 compute-0 nova_compute[189387]: 2025-11-26 23:31:55.342 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:31:59 compute-0 podman[203621]: time="2025-11-26T23:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:31:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:31:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4811 "" "Go-http-client/1.1"
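
The two Go-http-client requests above are a client polling podman's libpod REST API through the podman service socket (the podman_exporter configured later in this log points CONTAINER_HOST at unix:///run/podman/podman.sock). A self-contained sketch of the same containers/json query over a unix socket, standard library only; the socket path is taken from that config and is otherwise an assumption:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            # connect over the podman unix socket instead of TCP
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(json.loads(conn.getresponse().read()))
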
Nov 26 23:32:00 compute-0 nova_compute[189387]: 2025-11-26 23:32:00.156 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:32:00 compute-0 nova_compute[189387]: 2025-11-26 23:32:00.345 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:32:01 compute-0 openstack_network_exporter[205787]: ERROR   23:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:32:01 compute-0 openstack_network_exporter[205787]: ERROR   23:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:32:01 compute-0 openstack_network_exporter[205787]: ERROR   23:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:32:01 compute-0 openstack_network_exporter[205787]: ERROR   23:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:32:01 compute-0 openstack_network_exporter[205787]: ERROR   23:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
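
These appctl.go errors mean the exporter could not find the <daemon>.<pid>.ctl control sockets that ovs-appctl-style calls need; on a compute node ovn-northd does not run and there is no userspace (netdev) datapath, so the failures are expected noise rather than a fault. A sketch of the discovery step that fails, with the run directory as an assumption:

    import glob

    # OVS/OVN daemons create <rundir>/<daemon>.<pid>.ctl control sockets.
    ctl_files = glob.glob("/var/run/ovn/ovn-northd.*.ctl")  # assumed rundir
    if not ctl_files:
        print("no control socket files found for ovn-northd")  # as logged above
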
Nov 26 23:32:03 compute-0 podman[245322]: 2025-11-26 23:32:03.807254096 +0000 UTC m=+0.102119480 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 26 23:32:04 compute-0 systemd-logind[819]: New session 30 of user zuul.
Nov 26 23:32:04 compute-0 systemd[1]: Started Session 30 of User zuul.
Nov 26 23:32:05 compute-0 nova_compute[189387]: 2025-11-26 23:32:05.158 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:32:05 compute-0 nova_compute[189387]: 2025-11-26 23:32:05.347 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:32:06 compute-0 python3[245521]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
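
This is the zuul user's Ansible command module shelling out to confirm the ceilometer_agent_compute container exists and is healthy. The equivalent check, sketched directly in Python:

    import subprocess

    cmd = ('podman ps -a --format "{{.Names}} {{.Status}}"'
           ' | grep ceilometer_agent_compute')
    result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    # rc 0 with a line like "ceilometer_agent_compute Up ... (healthy)" means
    # the container is present; grep exiting 1 means it is missing.
    print(result.returncode, result.stdout.strip())
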
Nov 26 23:32:07 compute-0 podman[245559]: 2025-11-26 23:32:07.829502833 +0000 UTC m=+0.112215239 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:32:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:32:09.637 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:32:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:32:09.638 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:32:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:32:09.639 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
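
The Acquiring/acquired/"released" triple is oslo_concurrency's standard lock tracing (lockutils.py:404, :409, :423), emitted automatically around any code that takes a named lock. The pattern producing it is just the lockutils context manager:

    from oslo_concurrency import lockutils

    # Entering logs 'Acquiring lock ...' then 'Lock ... acquired' with the
    # waited time; exiting logs 'Lock ... "released"' with the held time.
    with lockutils.lock("_check_child_processes"):
        pass  # monitor child processes while holding the lock
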
Nov 26 23:32:10 compute-0 nova_compute[189387]: 2025-11-26 23:32:10.162 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:32:10 compute-0 nova_compute[189387]: 2025-11-26 23:32:10.349 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:32:11 compute-0 ovn_controller[97697]: 2025-11-26T23:32:11Z|00057|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Nov 26 23:32:13 compute-0 podman[245582]: 2025-11-26 23:32:13.832251868 +0000 UTC m=+0.120785418 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:32:15 compute-0 nova_compute[189387]: 2025-11-26 23:32:15.166 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:32:15 compute-0 nova_compute[189387]: 2025-11-26 23:32:15.352 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:32:20 compute-0 nova_compute[189387]: 2025-11-26 23:32:20.169 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:32:20 compute-0 nova_compute[189387]: 2025-11-26 23:32:20.355 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:32:21 compute-0 nova_compute[189387]: 2025-11-26 23:32:21.867 189391 DEBUG oslo_concurrency.lockutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:32:21 compute-0 nova_compute[189387]: 2025-11-26 23:32:21.868 189391 DEBUG oslo_concurrency.lockutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:32:21 compute-0 nova_compute[189387]: 2025-11-26 23:32:21.896 189391 DEBUG nova.compute.manager [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 26 23:32:21 compute-0 podman[245601]: 2025-11-26 23:32:21.919994416 +0000 UTC m=+0.191946900 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 26 23:32:22 compute-0 nova_compute[189387]: 2025-11-26 23:32:22.007 189391 DEBUG oslo_concurrency.lockutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:32:22 compute-0 nova_compute[189387]: 2025-11-26 23:32:22.009 189391 DEBUG oslo_concurrency.lockutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:32:22 compute-0 nova_compute[189387]: 2025-11-26 23:32:22.022 189391 DEBUG nova.virt.hardware [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 26 23:32:22 compute-0 nova_compute[189387]: 2025-11-26 23:32:22.023 189391 INFO nova.compute.claims [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Claim successful on node compute-0.ctlplane.example.com
Nov 26 23:32:22 compute-0 nova_compute[189387]: 2025-11-26 23:32:22.221 189391 DEBUG nova.compute.provider_tree [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:32:22 compute-0 nova_compute[189387]: 2025-11-26 23:32:22.242 189391 DEBUG nova.scheduler.client.report [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 23:32:22 compute-0 nova_compute[189387]: 2025-11-26 23:32:22.272 189391 DEBUG oslo_concurrency.lockutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.264s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:32:22 compute-0 nova_compute[189387]: 2025-11-26 23:32:22.274 189391 DEBUG nova.compute.manager [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 26 23:32:22 compute-0 nova_compute[189387]: 2025-11-26 23:32:22.346 189391 DEBUG nova.compute.manager [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Nov 26 23:32:22 compute-0 nova_compute[189387]: 2025-11-26 23:32:22.359 189391 INFO nova.virt.libvirt.driver [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 26 23:32:22 compute-0 nova_compute[189387]: 2025-11-26 23:32:22.411 189391 DEBUG nova.compute.manager [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 26 23:32:22 compute-0 nova_compute[189387]: 2025-11-26 23:32:22.495 189391 DEBUG nova.compute.manager [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 26 23:32:22 compute-0 nova_compute[189387]: 2025-11-26 23:32:22.497 189391 DEBUG nova.virt.libvirt.driver [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 26 23:32:22 compute-0 nova_compute[189387]: 2025-11-26 23:32:22.498 189391 INFO nova.virt.libvirt.driver [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Creating image(s)
Nov 26 23:32:22 compute-0 nova_compute[189387]: 2025-11-26 23:32:22.499 189391 DEBUG oslo_concurrency.lockutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "/var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:32:22 compute-0 nova_compute[189387]: 2025-11-26 23:32:22.500 189391 DEBUG oslo_concurrency.lockutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "/var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:32:22 compute-0 nova_compute[189387]: 2025-11-26 23:32:22.501 189391 DEBUG oslo_concurrency.lockutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "/var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:32:22 compute-0 nova_compute[189387]: 2025-11-26 23:32:22.502 189391 DEBUG oslo_concurrency.lockutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "70621b30123d1851a67a3cfd3d5b49a7a1030e86" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:32:22 compute-0 nova_compute[189387]: 2025-11-26 23:32:22.503 189391 DEBUG oslo_concurrency.lockutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "70621b30123d1851a67a3cfd3d5b49a7a1030e86" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:32:23 compute-0 nova_compute[189387]: 2025-11-26 23:32:23.698 189391 DEBUG oslo_concurrency.processutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/70621b30123d1851a67a3cfd3d5b49a7a1030e86.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:32:23 compute-0 nova_compute[189387]: 2025-11-26 23:32:23.795 189391 DEBUG oslo_concurrency.processutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/70621b30123d1851a67a3cfd3d5b49a7a1030e86.part --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:32:23 compute-0 nova_compute[189387]: 2025-11-26 23:32:23.798 189391 DEBUG nova.virt.images [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] 9615d08d-8a5e-4035-96a9-c9e590af081c was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 26 23:32:23 compute-0 nova_compute[189387]: 2025-11-26 23:32:23.801 189391 DEBUG nova.privsep.utils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 26 23:32:23 compute-0 nova_compute[189387]: 2025-11-26 23:32:23.802 189391 DEBUG oslo_concurrency.processutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/70621b30123d1851a67a3cfd3d5b49a7a1030e86.part /var/lib/nova/instances/_base/70621b30123d1851a67a3cfd3d5b49a7a1030e86.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.019 189391 DEBUG oslo_concurrency.processutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/70621b30123d1851a67a3cfd3d5b49a7a1030e86.part /var/lib/nova/instances/_base/70621b30123d1851a67a3cfd3d5b49a7a1030e86.converted" returned: 0 in 0.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.027 189391 DEBUG oslo_concurrency.processutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/70621b30123d1851a67a3cfd3d5b49a7a1030e86.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.114 189391 DEBUG oslo_concurrency.processutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/70621b30123d1851a67a3cfd3d5b49a7a1030e86.converted --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.117 189391 DEBUG oslo_concurrency.lockutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "70621b30123d1851a67a3cfd3d5b49a7a1030e86" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
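
Under the per-image cache lock, nova probes the downloaded .part file, converts it from qcow2 to raw (force_raw_images behavior), and re-probes the result before it lands in _base; the prlimit wrapper caps address space and CPU time so a malformed image cannot wedge qemu-img. The sequence condenses to the following sketch (paths taken from the log; not nova's actual code):

    import json
    import subprocess

    base = "/var/lib/nova/instances/_base/70621b30123d1851a67a3cfd3d5b49a7a1030e86"

    def img_info(path):
        # mirrors the prlimit-wrapped "qemu-img info --force-share --output=json"
        out = subprocess.run(
            ["qemu-img", "info", "--force-share", "--output=json", path],
            capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    if img_info(base + ".part")["format"] == "qcow2":
        subprocess.run(["qemu-img", "convert", "-t", "none", "-O", "raw",
                        "-f", "qcow2", base + ".part", base + ".converted"],
                       check=True)
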
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.146 189391 DEBUG oslo_concurrency.processutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/70621b30123d1851a67a3cfd3d5b49a7a1030e86 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.234 189391 DEBUG oslo_concurrency.processutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/70621b30123d1851a67a3cfd3d5b49a7a1030e86 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.236 189391 DEBUG oslo_concurrency.lockutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "70621b30123d1851a67a3cfd3d5b49a7a1030e86" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.237 189391 DEBUG oslo_concurrency.lockutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "70621b30123d1851a67a3cfd3d5b49a7a1030e86" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.252 189391 DEBUG oslo_concurrency.processutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/70621b30123d1851a67a3cfd3d5b49a7a1030e86 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.329 189391 DEBUG oslo_concurrency.processutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/70621b30123d1851a67a3cfd3d5b49a7a1030e86 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.331 189391 DEBUG oslo_concurrency.processutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/70621b30123d1851a67a3cfd3d5b49a7a1030e86,backing_fmt=raw /var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.381 189391 DEBUG oslo_concurrency.processutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/70621b30123d1851a67a3cfd3d5b49a7a1030e86,backing_fmt=raw /var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk 1073741824" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.382 189391 DEBUG oslo_concurrency.lockutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "70621b30123d1851a67a3cfd3d5b49a7a1030e86" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
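
With the raw base image cached, the instance's root disk is created as a copy-on-write qcow2 overlay whose backing file is the shared base, at the flavor's 1 GiB virtual size; only the instance's own writes land in the per-instance file. The logged command reduces to:

    import subprocess

    base = "/var/lib/nova/instances/_base/70621b30123d1851a67a3cfd3d5b49a7a1030e86"
    disk = "/var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk"
    subprocess.run(["qemu-img", "create", "-f", "qcow2",
                    "-o", f"backing_file={base},backing_fmt=raw",
                    disk, "1073741824"],  # 1 GiB virtual size
                   check=True)
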
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.383 189391 DEBUG oslo_concurrency.processutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/70621b30123d1851a67a3cfd3d5b49a7a1030e86 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.437 189391 DEBUG oslo_concurrency.processutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/70621b30123d1851a67a3cfd3d5b49a7a1030e86 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.438 189391 DEBUG nova.virt.disk.api [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Checking if we can resize image /var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.439 189391 DEBUG oslo_concurrency.processutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.497 189391 DEBUG oslo_concurrency.processutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.499 189391 DEBUG nova.virt.disk.api [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Cannot resize image /var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
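
The can_resize_image check compares the requested size against the overlay's current virtual size and only ever grows an image, since shrinking would truncate guest data; here the disk was just created at the requested 1073741824 bytes, so the "Cannot resize image ... to a smaller size" DEBUG line simply means there is nothing to grow. Roughly, as a sketch:

    import json
    import subprocess

    def can_resize_image(path, new_size):
        # growing only: anything at or below the current virtual size is refused
        info = subprocess.run(
            ["qemu-img", "info", "--force-share", "--output=json", path],
            capture_output=True, text=True, check=True).stdout
        return new_size > json.loads(info)["virtual-size"]
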
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.500 189391 DEBUG nova.objects.instance [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lazy-loading 'migration_context' on Instance uuid eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.518 189391 DEBUG oslo_concurrency.lockutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "/var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.520 189391 DEBUG oslo_concurrency.lockutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "/var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.521 189391 DEBUG oslo_concurrency.lockutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "/var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.546 189391 DEBUG oslo_concurrency.processutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.622 189391 DEBUG oslo_concurrency.processutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.624 189391 DEBUG oslo_concurrency.lockutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.626 189391 DEBUG oslo_concurrency.lockutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.651 189391 DEBUG oslo_concurrency.processutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.743 189391 DEBUG oslo_concurrency.processutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.745 189391 DEBUG oslo_concurrency.processutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.797 189391 DEBUG oslo_concurrency.processutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.eph0 1073741824" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.798 189391 DEBUG oslo_concurrency.lockutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.798 189391 DEBUG oslo_concurrency.processutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:32:24 compute-0 podman[245660]: 2025-11-26 23:32:24.81149135 +0000 UTC m=+0.078482048 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 26 23:32:24 compute-0 podman[245661]: 2025-11-26 23:32:24.824180639 +0000 UTC m=+0.083557524 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0)
Nov 26 23:32:24 compute-0 podman[245658]: 2025-11-26 23:32:24.831747451 +0000 UTC m=+0.112693662 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, io.openshift.expose-services=, config_id=edpm, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, architecture=x86_64, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release-0.7.12=, vendor=Red Hat, Inc.)
Nov 26 23:32:24 compute-0 podman[245659]: 2025-11-26 23:32:24.831834543 +0000 UTC m=+0.117074498 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 23:32:24 compute-0 podman[245662]: 2025-11-26 23:32:24.852216078 +0000 UTC m=+0.115182928 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, container_name=openstack_network_exporter, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, build-date=2025-08-20T13:12:41)
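[editor's note] The five health_status=healthy records above are emitted by podman's periodic healthcheck timers; each config_data block mounts a script at /openstack/healthcheck inside the container and names it as the healthcheck test. A minimal sketch for re-running those checks on demand, assuming the podman CLI is on PATH and using the container names from the records above:

    import subprocess

    # Container names taken from the health_status records above.
    CONTAINERS = ["ovn_metadata_agent", "ceilometer_agent_ipmi", "kepler",
                  "node_exporter", "openstack_network_exporter"]

    for name in CONTAINERS:
        # "podman healthcheck run" executes the container's configured
        # healthcheck command and exits 0 when it reports healthy.
        result = subprocess.run(["podman", "healthcheck", "run", name],
                                capture_output=True, text=True)
        print(name, "healthy" if result.returncode == 0 else "unhealthy")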
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.878 189391 DEBUG oslo_concurrency.processutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.878 189391 DEBUG nova.virt.libvirt.driver [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.879 189391 DEBUG nova.virt.libvirt.driver [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Ensure instance console log exists: /var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.879 189391 DEBUG oslo_concurrency.lockutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.879 189391 DEBUG oslo_concurrency.lockutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.880 189391 DEBUG oslo_concurrency.lockutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
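[editor's note] The acquire/release pair above is oslo.concurrency's named-lock pattern: _allocate_mdevs serializes vGPU bookkeeping across concurrent builds, and because this flavor requests no vGPUs the lock is held for 0.000s. A sketch of the same pattern, assuming oslo.concurrency is installed; the function body here is a stand-in, not nova's implementation:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("vgpu_resources")
    def allocate_mdevs(allocations):
        # Critical section: with no vGPU request there is nothing to
        # reserve, which is why the log shows the lock held for 0.000s.
        return []

    print(allocate_mdevs({}))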
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.881 189391 DEBUG nova.virt.libvirt.driver [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-26T23:32:10Z,direct_url=<?>,disk_format='qcow2',id=9615d08d-8a5e-4035-96a9-c9e590af081c,min_disk=0,min_ram=0,name='fvt_testing_image',owner='dd2e793599b6418881c391df7f71e0c6',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-26T23:32:15Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': '9615d08d-8a5e-4035-96a9-c9e590af081c'}], 'ephemerals': [{'size': 1, 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vdb'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.888 189391 WARNING nova.virt.libvirt.driver [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.894 189391 DEBUG nova.virt.libvirt.host [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.895 189391 DEBUG nova.virt.libvirt.host [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.899 189391 DEBUG nova.virt.libvirt.host [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.900 189391 DEBUG nova.virt.libvirt.host [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.900 189391 DEBUG nova.virt.libvirt.driver [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.900 189391 DEBUG nova.virt.hardware [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T23:32:17Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='20ec6b7d-6dc3-4091-bf6e-ad76423d378c',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-26T23:32:10Z,direct_url=<?>,disk_format='qcow2',id=9615d08d-8a5e-4035-96a9-c9e590af081c,min_disk=0,min_ram=0,name='fvt_testing_image',owner='dd2e793599b6418881c391df7f71e0c6',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-26T23:32:15Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.901 189391 DEBUG nova.virt.hardware [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.901 189391 DEBUG nova.virt.hardware [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.901 189391 DEBUG nova.virt.hardware [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.901 189391 DEBUG nova.virt.hardware [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.902 189391 DEBUG nova.virt.hardware [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.902 189391 DEBUG nova.virt.hardware [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.902 189391 DEBUG nova.virt.hardware [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.902 189391 DEBUG nova.virt.hardware [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.903 189391 DEBUG nova.virt.hardware [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.903 189391 DEBUG nova.virt.hardware [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
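[editor's note] The hardware.py lines above enumerate candidate (sockets, cores, threads) splits for 1 vCPU under the default 65536 limits and settle on 1:1:1. A simplified re-implementation of that enumeration (the same idea, not nova's exact code):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield every (sockets, cores, threads) whose product equals vcpus,
        # mirroring the search that produced "Got 1 possible topologies".
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))   # [(1, 1, 1)], as in the log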
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.907 189391 DEBUG nova.objects.instance [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lazy-loading 'pci_devices' on Instance uuid eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.927 189391 DEBUG nova.virt.libvirt.driver [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] End _get_guest_xml xml=<domain type="kvm">
Nov 26 23:32:24 compute-0 nova_compute[189387]:  <uuid>eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74</uuid>
Nov 26 23:32:24 compute-0 nova_compute[189387]:  <name>instance-00000005</name>
Nov 26 23:32:24 compute-0 nova_compute[189387]:  <memory>524288</memory>
Nov 26 23:32:24 compute-0 nova_compute[189387]:  <vcpu>1</vcpu>
Nov 26 23:32:24 compute-0 nova_compute[189387]:  <metadata>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 23:32:24 compute-0 nova_compute[189387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:      <nova:name>fvt_testing_server</nova:name>
Nov 26 23:32:24 compute-0 nova_compute[189387]:      <nova:creationTime>2025-11-26 23:32:24</nova:creationTime>
Nov 26 23:32:24 compute-0 nova_compute[189387]:      <nova:flavor name="fvt_testing_flavor">
Nov 26 23:32:24 compute-0 nova_compute[189387]:        <nova:memory>512</nova:memory>
Nov 26 23:32:24 compute-0 nova_compute[189387]:        <nova:disk>1</nova:disk>
Nov 26 23:32:24 compute-0 nova_compute[189387]:        <nova:swap>0</nova:swap>
Nov 26 23:32:24 compute-0 nova_compute[189387]:        <nova:ephemeral>1</nova:ephemeral>
Nov 26 23:32:24 compute-0 nova_compute[189387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 23:32:24 compute-0 nova_compute[189387]:      </nova:flavor>
Nov 26 23:32:24 compute-0 nova_compute[189387]:      <nova:owner>
Nov 26 23:32:24 compute-0 nova_compute[189387]:        <nova:user uuid="6ad061874c77438db2e6d8efb2b1400b">admin</nova:user>
Nov 26 23:32:24 compute-0 nova_compute[189387]:        <nova:project uuid="dd2e793599b6418881c391df7f71e0c6">admin</nova:project>
Nov 26 23:32:24 compute-0 nova_compute[189387]:      </nova:owner>
Nov 26 23:32:24 compute-0 nova_compute[189387]:      <nova:root type="image" uuid="9615d08d-8a5e-4035-96a9-c9e590af081c"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:      <nova:ports/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    </nova:instance>
Nov 26 23:32:24 compute-0 nova_compute[189387]:  </metadata>
Nov 26 23:32:24 compute-0 nova_compute[189387]:  <sysinfo type="smbios">
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <system>
Nov 26 23:32:24 compute-0 nova_compute[189387]:      <entry name="manufacturer">RDO</entry>
Nov 26 23:32:24 compute-0 nova_compute[189387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 23:32:24 compute-0 nova_compute[189387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 23:32:24 compute-0 nova_compute[189387]:      <entry name="serial">eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74</entry>
Nov 26 23:32:24 compute-0 nova_compute[189387]:      <entry name="uuid">eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74</entry>
Nov 26 23:32:24 compute-0 nova_compute[189387]:      <entry name="family">Virtual Machine</entry>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    </system>
Nov 26 23:32:24 compute-0 nova_compute[189387]:  </sysinfo>
Nov 26 23:32:24 compute-0 nova_compute[189387]:  <os>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <boot dev="hd"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <smbios mode="sysinfo"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:  </os>
Nov 26 23:32:24 compute-0 nova_compute[189387]:  <features>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <acpi/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <apic/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <vmcoreinfo/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:  </features>
Nov 26 23:32:24 compute-0 nova_compute[189387]:  <clock offset="utc">
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <timer name="hpet" present="no"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:  </clock>
Nov 26 23:32:24 compute-0 nova_compute[189387]:  <cpu mode="host-model" match="exact">
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:  </cpu>
Nov 26 23:32:24 compute-0 nova_compute[189387]:  <devices>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <disk type="file" device="disk">
Nov 26 23:32:24 compute-0 nova_compute[189387]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:      <target dev="vda" bus="virtio"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <disk type="file" device="disk">
Nov 26 23:32:24 compute-0 nova_compute[189387]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.eph0"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:      <target dev="vdb" bus="virtio"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <disk type="file" device="cdrom">
Nov 26 23:32:24 compute-0 nova_compute[189387]:      <driver name="qemu" type="raw" cache="none"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.config"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:      <target dev="sda" bus="sata"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <serial type="pty">
Nov 26 23:32:24 compute-0 nova_compute[189387]:      <log file="/var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/console.log" append="off"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    </serial>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <video>
Nov 26 23:32:24 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    </video>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <input type="tablet" bus="usb"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <rng model="virtio">
Nov 26 23:32:24 compute-0 nova_compute[189387]:      <backend model="random">/dev/urandom</backend>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    </rng>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <controller type="usb" index="0"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    <memballoon model="virtio">
Nov 26 23:32:24 compute-0 nova_compute[189387]:      <stats period="10"/>
Nov 26 23:32:24 compute-0 nova_compute[189387]:    </memballoon>
Nov 26 23:32:24 compute-0 nova_compute[189387]:  </devices>
Nov 26 23:32:24 compute-0 nova_compute[189387]: </domain>
Nov 26 23:32:24 compute-0 nova_compute[189387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
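[editor's note] The generated domain XML above can be inspected offline with the standard library. A sketch that lists each disk's guest target and source, matching the three disks nova attached (vda root, vdb ephemeral, sda config drive); the file name is a hypothetical local copy of the XML dump:

    import xml.etree.ElementTree as ET

    tree = ET.parse("instance-00000005.xml")  # hypothetical saved copy
    for disk in tree.getroot().findall("./devices/disk"):
        source = disk.find("source")
        target = disk.find("target")
        print(disk.get("device"), target.get("dev"), target.get("bus"),
              source.get("file") if source is not None else "-")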
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.981 189391 DEBUG nova.virt.libvirt.driver [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.982 189391 DEBUG nova.virt.libvirt.driver [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.982 189391 DEBUG nova.virt.libvirt.driver [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:32:24 compute-0 nova_compute[189387]: 2025-11-26 23:32:24.982 189391 INFO nova.virt.libvirt.driver [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Using config drive#033[00m
Nov 26 23:32:25 compute-0 nova_compute[189387]: 2025-11-26 23:32:25.172 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:32:25 compute-0 nova_compute[189387]: 2025-11-26 23:32:25.348 189391 INFO nova.virt.libvirt.driver [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Creating config drive at /var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.config#033[00m
Nov 26 23:32:25 compute-0 nova_compute[189387]: 2025-11-26 23:32:25.353 189391 DEBUG oslo_concurrency.processutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmsp62cii execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:32:25 compute-0 nova_compute[189387]: 2025-11-26 23:32:25.379 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:32:25 compute-0 nova_compute[189387]: 2025-11-26 23:32:25.498 189391 DEBUG oslo_concurrency.processutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmsp62cii" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
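[editor's note] The mkisofs call above packs the metadata staged in the temporary directory into an ISO9660 config drive labelled config-2, the volume label cloud-init looks for. A sketch of the same invocation with the flags copied from the log line; both paths here are hypothetical:

    import subprocess

    def build_config_drive(output_iso, staging_dir):
        # Same flags nova passed above: Joliet + Rock Ridge, lowercase
        # and multi-dot names allowed, volume label "config-2".
        cmd = ["/usr/bin/mkisofs", "-o", output_iso,
               "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
               "-publisher", "OpenStack Compute", "-quiet",
               "-J", "-r", "-V", "config-2", staging_dir]
        subprocess.run(cmd, check=True)

    build_config_drive("/tmp/disk.config", "/tmp/config_drive_files")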
Nov 26 23:32:25 compute-0 systemd-machined[155674]: New machine qemu-5-instance-00000005.
Nov 26 23:32:25 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Nov 26 23:32:26 compute-0 nova_compute[189387]: 2025-11-26 23:32:26.019 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764199946.0189497, eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:32:26 compute-0 nova_compute[189387]: 2025-11-26 23:32:26.020 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] VM Resumed (Lifecycle Event)#033[00m
Nov 26 23:32:26 compute-0 nova_compute[189387]: 2025-11-26 23:32:26.024 189391 DEBUG nova.compute.manager [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 26 23:32:26 compute-0 nova_compute[189387]: 2025-11-26 23:32:26.024 189391 DEBUG nova.virt.libvirt.driver [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 26 23:32:26 compute-0 nova_compute[189387]: 2025-11-26 23:32:26.032 189391 INFO nova.virt.libvirt.driver [-] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Instance spawned successfully.#033[00m
Nov 26 23:32:26 compute-0 nova_compute[189387]: 2025-11-26 23:32:26.033 189391 DEBUG nova.virt.libvirt.driver [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 26 23:32:26 compute-0 nova_compute[189387]: 2025-11-26 23:32:26.060 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:32:26 compute-0 nova_compute[189387]: 2025-11-26 23:32:26.075 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 23:32:26 compute-0 nova_compute[189387]: 2025-11-26 23:32:26.084 189391 DEBUG nova.virt.libvirt.driver [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:32:26 compute-0 nova_compute[189387]: 2025-11-26 23:32:26.084 189391 DEBUG nova.virt.libvirt.driver [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:32:26 compute-0 nova_compute[189387]: 2025-11-26 23:32:26.085 189391 DEBUG nova.virt.libvirt.driver [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:32:26 compute-0 nova_compute[189387]: 2025-11-26 23:32:26.085 189391 DEBUG nova.virt.libvirt.driver [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:32:26 compute-0 nova_compute[189387]: 2025-11-26 23:32:26.086 189391 DEBUG nova.virt.libvirt.driver [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:32:26 compute-0 nova_compute[189387]: 2025-11-26 23:32:26.087 189391 DEBUG nova.virt.libvirt.driver [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:32:26 compute-0 nova_compute[189387]: 2025-11-26 23:32:26.122 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 23:32:26 compute-0 nova_compute[189387]: 2025-11-26 23:32:26.123 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764199946.025462, eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:32:26 compute-0 nova_compute[189387]: 2025-11-26 23:32:26.123 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] VM Started (Lifecycle Event)#033[00m
Nov 26 23:32:26 compute-0 nova_compute[189387]: 2025-11-26 23:32:26.161 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:32:26 compute-0 nova_compute[189387]: 2025-11-26 23:32:26.165 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 23:32:26 compute-0 nova_compute[189387]: 2025-11-26 23:32:26.171 189391 INFO nova.compute.manager [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Took 3.68 seconds to spawn the instance on the hypervisor.#033[00m
Nov 26 23:32:26 compute-0 nova_compute[189387]: 2025-11-26 23:32:26.171 189391 DEBUG nova.compute.manager [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:32:26 compute-0 nova_compute[189387]: 2025-11-26 23:32:26.206 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 23:32:26 compute-0 nova_compute[189387]: 2025-11-26 23:32:26.266 189391 INFO nova.compute.manager [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Took 4.31 seconds to build instance.#033[00m
Nov 26 23:32:26 compute-0 nova_compute[189387]: 2025-11-26 23:32:26.291 189391 DEBUG oslo_concurrency.lockutils [None req-0318ff14-b232-4d73-8e77-dafc556277c9 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.422s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
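[editor's note] The Resumed/Started pairs above originate as libvirt lifecycle events, which the driver forwards to the compute manager for power-state sync. A minimal standalone listener for the same event stream, assuming libvirt-python and a local qemu:///system socket (read-only access suffices):

    import libvirt

    LIFECYCLE = ["Defined", "Undefined", "Started", "Suspended", "Resumed",
                 "Stopped", "Shutdown", "PMSuspended", "Crashed"]

    def on_lifecycle(conn, dom, event, detail, opaque):
        # "event" indexes the VIR_DOMAIN_EVENT_* lifecycle constants.
        name = LIFECYCLE[event] if event < len(LIFECYCLE) else str(event)
        print(dom.UUIDString(), "=>", name)

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.openReadOnly("qemu:///system")
    conn.domainEventRegisterAny(None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                                on_lifecycle, None)
    while True:
        libvirt.virEventRunDefaultImpl()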
Nov 26 23:32:27 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 26 23:32:27 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 26 23:32:29 compute-0 podman[203621]: time="2025-11-26T23:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:32:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:32:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
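[editor's note] The two GET lines above are the podman API service answering libpod REST calls over its unix socket (the info line notes that a `last` parameter overwrites `limit`). A raw-socket sketch of the containers/json call with a simplified query string; the rootful socket path is an assumption, and the version prefix matches the log:

    import socket

    SOCK = "/run/podman/podman.sock"      # assumed rootful service socket
    REQUEST = (b"GET /v4.9.3/libpod/containers/json?all=true HTTP/1.1\r\n"
               b"Host: localhost\r\nConnection: close\r\n\r\n")

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCK)
        s.sendall(REQUEST)
        response = b""
        while chunk := s.recv(4096):
            response += chunk

    # Status line and headers precede the JSON body in the response.
    print(response.split(b"\r\n\r\n", 1)[0].decode())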
Nov 26 23:32:30 compute-0 nova_compute[189387]: 2025-11-26 23:32:30.176 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:32:30 compute-0 nova_compute[189387]: 2025-11-26 23:32:30.382 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:32:31 compute-0 openstack_network_exporter[205787]: ERROR   23:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:32:31 compute-0 openstack_network_exporter[205787]: ERROR   23:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:32:31 compute-0 openstack_network_exporter[205787]: ERROR   23:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:32:31 compute-0 openstack_network_exporter[205787]: ERROR   23:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:32:31 compute-0 openstack_network_exporter[205787]: ERROR   23:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
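[editor's note] These errors show the exporter probing control sockets it cannot reach from its container: ovn-northd normally runs on the control plane rather than on a compute node, the ovsdb-server control socket is apparently not at the path the exporter checks, and with no userspace (dpif-netdev) datapath the PMD queries fail too. A sketch of the same existence check; the glob patterns are assumptions based on default OVS/OVN runtime directories:

    import glob

    # Default runtime dirs are assumptions; deployments often relocate them.
    PATTERNS = {
        "ovn-northd": "/var/run/ovn/ovn-northd.*.ctl",
        "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
    }

    for daemon, pattern in PATTERNS.items():
        sockets = glob.glob(pattern)
        if sockets:
            print(daemon, "control socket:", sockets[0])
        else:
            # Mirrors the exporter's "no control socket files found" errors.
            print(daemon, "no control socket files found")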
Nov 26 23:32:33 compute-0 nova_compute[189387]: 2025-11-26 23:32:33.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:32:33 compute-0 nova_compute[189387]: 2025-11-26 23:32:33.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 23:32:33 compute-0 nova_compute[189387]: 2025-11-26 23:32:33.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 23:32:33 compute-0 nova_compute[189387]: 2025-11-26 23:32:33.351 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:32:33 compute-0 nova_compute[189387]: 2025-11-26 23:32:33.351 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:32:33 compute-0 nova_compute[189387]: 2025-11-26 23:32:33.352 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 26 23:32:33 compute-0 nova_compute[189387]: 2025-11-26 23:32:33.352 189391 DEBUG nova.objects.instance [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3214d9e6-3c61-49f0-a353-01201a6aa6db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:32:34 compute-0 nova_compute[189387]: 2025-11-26 23:32:34.678 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Updating instance_info_cache with network_info: [{"id": "3109b207-2fdd-46a4-8789-08fff2b3f916", "address": "fa:16:3e:bf:c7:ca", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3109b207-2f", "ovs_interfaceid": "3109b207-2fdd-46a4-8789-08fff2b3f916", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:32:34 compute-0 nova_compute[189387]: 2025-11-26 23:32:34.708 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:32:34 compute-0 nova_compute[189387]: 2025-11-26 23:32:34.709 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
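[editor's note] The info cache entry written above is a list of VIF dicts, each carrying subnets with fixed IPs and any attached floating IPs. A sketch pulling the addresses out of a structure shaped like the one logged; the literal below is trimmed to the relevant keys:

    network_info = [{
        "address": "fa:16:3e:bf:c7:ca",
        "network": {"subnets": [{
            "ips": [{"address": "192.168.0.4", "type": "fixed",
                     "floating_ips": [{"address": "192.168.122.212",
                                       "type": "floating"}]}],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["address"], "fixed:", ip["address"])
                for fip in ip.get("floating_ips", []):
                    print(vif["address"], "floating:", fip["address"])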
Nov 26 23:32:34 compute-0 nova_compute[189387]: 2025-11-26 23:32:34.710 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:32:34 compute-0 nova_compute[189387]: 2025-11-26 23:32:34.740 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:32:34 compute-0 nova_compute[189387]: 2025-11-26 23:32:34.741 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:32:34 compute-0 nova_compute[189387]: 2025-11-26 23:32:34.741 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:32:34 compute-0 nova_compute[189387]: 2025-11-26 23:32:34.742 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 23:32:34 compute-0 podman[245811]: 2025-11-26 23:32:34.875585167 +0000 UTC m=+0.151057527 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 26 23:32:34 compute-0 nova_compute[189387]: 2025-11-26 23:32:34.898 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:32:34 compute-0 nova_compute[189387]: 2025-11-26 23:32:34.976 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:32:34 compute-0 nova_compute[189387]: 2025-11-26 23:32:34.978 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:32:35 compute-0 nova_compute[189387]: 2025-11-26 23:32:35.046 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:32:35 compute-0 nova_compute[189387]: 2025-11-26 23:32:35.047 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:32:35 compute-0 nova_compute[189387]: 2025-11-26 23:32:35.113 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:32:35 compute-0 nova_compute[189387]: 2025-11-26 23:32:35.114 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:32:35 compute-0 nova_compute[189387]: 2025-11-26 23:32:35.178 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:32:35 compute-0 nova_compute[189387]: 2025-11-26 23:32:35.211 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.eph0 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:32:35 compute-0 nova_compute[189387]: 2025-11-26 23:32:35.225 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:32:35 compute-0 nova_compute[189387]: 2025-11-26 23:32:35.297 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:32:35 compute-0 nova_compute[189387]: 2025-11-26 23:32:35.298 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:32:35 compute-0 nova_compute[189387]: 2025-11-26 23:32:35.362 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:32:35 compute-0 nova_compute[189387]: 2025-11-26 23:32:35.363 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:32:35 compute-0 nova_compute[189387]: 2025-11-26 23:32:35.384 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:32:35 compute-0 nova_compute[189387]: 2025-11-26 23:32:35.424 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:32:35 compute-0 nova_compute[189387]: 2025-11-26 23:32:35.424 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:32:35 compute-0 nova_compute[189387]: 2025-11-26 23:32:35.483 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:32:35 compute-0 nova_compute[189387]: 2025-11-26 23:32:35.493 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:32:35 compute-0 nova_compute[189387]: 2025-11-26 23:32:35.591 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:32:35 compute-0 nova_compute[189387]: 2025-11-26 23:32:35.592 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:32:35 compute-0 nova_compute[189387]: 2025-11-26 23:32:35.666 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:32:35 compute-0 nova_compute[189387]: 2025-11-26 23:32:35.668 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:32:35 compute-0 nova_compute[189387]: 2025-11-26 23:32:35.729 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:32:35 compute-0 nova_compute[189387]: 2025-11-26 23:32:35.730 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:32:35 compute-0 nova_compute[189387]: 2025-11-26 23:32:35.788 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:32:36 compute-0 nova_compute[189387]: 2025-11-26 23:32:36.192 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:32:36 compute-0 nova_compute[189387]: 2025-11-26 23:32:36.195 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4803MB free_disk=72.33419036865234GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 23:32:36 compute-0 nova_compute[189387]: 2025-11-26 23:32:36.195 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:32:36 compute-0 nova_compute[189387]: 2025-11-26 23:32:36.196 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:32:36 compute-0 nova_compute[189387]: 2025-11-26 23:32:36.312 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 3214d9e6-3c61-49f0-a353-01201a6aa6db actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:32:36 compute-0 nova_compute[189387]: 2025-11-26 23:32:36.313 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance f0ac9c29-04ba-4737-8af6-8fc91e451e8c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:32:36 compute-0 nova_compute[189387]: 2025-11-26 23:32:36.313 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:32:36 compute-0 nova_compute[189387]: 2025-11-26 23:32:36.314 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:32:36 compute-0 nova_compute[189387]: 2025-11-26 23:32:36.314 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
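The "Final resource view" follows arithmetically from the three placement allocations listed above; used_ram appears to fold in the 512 MB reserved by the inventory. A sketch of the assumed arithmetic:

# Each of the three instances holds {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}.
allocs = [{"DISK_GB": 2, "MEMORY_MB": 512, "VCPU": 1}] * 3
used_vcpus = sum(a["VCPU"] for a in allocs)           # 3, matching used_vcpus=3
used_disk = sum(a["DISK_GB"] for a in allocs)         # 6, matching used_disk=6GB
used_ram = sum(a["MEMORY_MB"] for a in allocs) + 512  # 2048, matching used_ram=2048MB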
Nov 26 23:32:36 compute-0 nova_compute[189387]: 2025-11-26 23:32:36.430 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:32:36 compute-0 nova_compute[189387]: 2025-11-26 23:32:36.453 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
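Placement derives usable capacity per resource class from this inventory as (total - reserved) * allocation_ratio; a worked sketch with the values logged above:

inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, usable)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2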
Nov 26 23:32:36 compute-0 nova_compute[189387]: 2025-11-26 23:32:36.491 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:32:36 compute-0 nova_compute[189387]: 2025-11-26 23:32:36.493 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.297s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
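The Acquiring/acquired/released trio above is oslo_concurrency's internal lock decorator at work; roughly the following pattern (a sketch, not nova's exact code):

import time
from oslo_concurrency import lockutils

@lockutils.synchronized("compute_resources")
def update_available_resource():
    # Critical section; the log shows the real one held the lock for 0.297 s.
    time.sleep(0.1)

update_available_resource()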
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.844 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; the polling cycle can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.845 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
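With more pollsters than worker threads, the executor runs them one after another, which is why the agent warns the cycle may take longer. An illustrative sketch (hypothetical timings):

import time
from concurrent.futures import ThreadPoolExecutor

def poll(name):
    time.sleep(0.1)  # stand-in for one pollster's work
    return name

with ThreadPoolExecutor(max_workers=1) as pool:  # [1] thread, as in the log
    start = time.monotonic()
    list(pool.map(poll, [f"pollster-{i}" for i in range(5)]))
print(f"{time.monotonic() - start:.1f}s")  # ~0.5 s: five tasks serialized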
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.846 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.855 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 26 23:32:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:36.856 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}caea05af4ff3bb71dca694a18a22cbf449a7452987534b1df6f159c64c91df36" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.488 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1572 Content-Type: application/json Date: Wed, 26 Nov 2025 23:32:36 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-158da440-6f21-4711-80a1-0e3e761bed20 x-openstack-request-id: req-158da440-6f21-4711-80a1-0e3e761bed20 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.488 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74", "name": "fvt_testing_server", "status": "ACTIVE", "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "user_id": "6ad061874c77438db2e6d8efb2b1400b", "metadata": {}, "hostId": "78fe62e880b703c207d346101c9f9f1436f7f233cb48d27a5485236f", "image": {"id": "9615d08d-8a5e-4035-96a9-c9e590af081c", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/9615d08d-8a5e-4035-96a9-c9e590af081c"}]}, "flavor": {"id": "20ec6b7d-6dc3-4091-bf6e-ad76423d378c", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/20ec6b7d-6dc3-4091-bf6e-ad76423d378c"}]}, "created": "2025-11-26T23:32:21Z", "updated": "2025-11-26T23:32:26Z", "addresses": {}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-26T23:32:26.000000", "OS-SRV-USG:terminated_at": null, "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000005", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.488 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74 used request id req-158da440-6f21-4711-80a1-0e3e761bed20 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
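The REQ/RESP pair above can be replayed outside the agent with a plain HTTP client; a hedged sketch (the real agent uses a keystoneauth1 Session, and the token below is a placeholder since the log only records its SHA256):

import requests

resp = requests.get(
    "https://nova-internal.openstack.svc:8774/v2.1/servers/"
    "eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74",
    headers={
        "Accept": "application/json",
        "X-OpenStack-Nova-API-Version": "2.1",
        "X-Auth-Token": "<token>",  # placeholder, not the real token
    },
)
resp.raise_for_status()
print(resp.json()["server"]["status"])  # "ACTIVE", per the RESP BODY above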
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.490 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74', 'name': 'fvt_testing_server', 'flavor': {'id': '20ec6b7d-6dc3-4091-bf6e-ad76423d378c', 'name': 'fvt_testing_flavor', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '9615d08d-8a5e-4035-96a9-c9e590af081c'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000005', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd2e793599b6418881c391df7f71e0c6', 'user_id': '6ad061874c77438db2e6d8efb2b1400b', 'hostId': '78fe62e880b703c207d346101c9f9f1436f7f233cb48d27a5485236f', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.494 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f0ac9c29-04ba-4737-8af6-8fc91e451e8c', 'name': 'vn-fhdmirp-gcwraztym6um-bi3jxhg2edck-vnf-4tssxs7u7dl3', 'flavor': {'id': 'abcd883d-a9af-4dee-93ae-b5623bc853b6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '422f324f-e13a-4c74-ba29-023e791ed636'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd2e793599b6418881c391df7f71e0c6', 'user_id': '6ad061874c77438db2e6d8efb2b1400b', 'hostId': '78fe62e880b703c207d346101c9f9f1436f7f233cb48d27a5485236f', 'status': 'active', 'metadata': {'metering.server_group': '6ec897c5-079b-468e-ab49-e7a7350f9bc9'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.499 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3214d9e6-3c61-49f0-a353-01201a6aa6db', 'name': 'test_0', 'flavor': {'id': 'abcd883d-a9af-4dee-93ae-b5623bc853b6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '422f324f-e13a-4c74-ba29-023e791ed636'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd2e793599b6418881c391df7f71e0c6', 'user_id': '6ad061874c77438db2e6d8efb2b1400b', 'hostId': '78fe62e880b703c207d346101c9f9f1436f7f233cb48d27a5485236f', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.499 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.500 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.500 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.500 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.501 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T23:32:37.500397) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.501 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
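disk.ephemeral.size needs no hypervisor call; it appears to come straight from the flavor in the discovery data above (assumed mapping, reported in GB):

discovered = {
    "id": "eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74",
    "flavor": {"name": "fvt_testing_flavor", "ephemeral": 1},
}
print(discovered["id"], "disk.ephemeral.size =", discovered["flavor"]["ephemeral"], "GB")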
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.502 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.502 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.502 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.502 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.503 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.503 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T23:32:37.503142) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.511 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.516 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.516 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
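The network.* samples in this cycle map naturally onto libvirt's per-interface counters; a sketch (the domain name comes from the log, the tap device name is an assumption):

import libvirt

conn = libvirt.openReadOnly("qemu:///system")
dom = conn.lookupByName("instance-00000004")  # f0ac9c29-..., per the log
# interfaceStats returns (rx_bytes, rx_packets, rx_errs, rx_drop,
#                         tx_bytes, tx_packets, tx_errs, tx_drop)
rx_bytes, rx_packets, rx_errs, rx_drop, tx_bytes, *_ = dom.interfaceStats("tap0")
print("network.incoming.packets:", rx_packets)  # 17 in the sample above
print("network.outgoing.bytes:", tx_bytes)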
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.517 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.517 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.517 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.518 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.518 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.519 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.519 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.519 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.520 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.520 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.520 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.520 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.521 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.521 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T23:32:37.518273) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.522 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T23:32:37.520521) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.522 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.522 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.522 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.522 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.522 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.523 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.523 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T23:32:37.523050) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.558 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/cpu volume: 11110000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.596 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/cpu volume: 38340000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.627 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/cpu volume: 44860000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.628 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
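The cpu samples above are cumulative guest CPU time in nanoseconds; turning two consecutive readings into a utilization percentage is a small calculation (assumed math, in the spirit of the old cpu_util transformer):

def cpu_util_percent(ns_prev, ns_now, interval_s, vcpus=1):
    # Fraction of the interval the guest's vCPUs spent running, as a percentage.
    return 100.0 * (ns_now - ns_prev) / (interval_s * 1e9 * vcpus)

# Hypothetical previous reading 300 s before the 44860000000 ns sample above:
print(cpu_util_percent(44_560_000_000, 44_860_000_000, 300))  # 0.1 (%)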
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.628 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.628 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.628 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.628 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.629 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.629 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.630 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T23:32:37.629138) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.630 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.631 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.631 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.631 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.631 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.631 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.632 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.633 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T23:32:37.632370) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.663 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.664 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.664 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.695 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.695 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.696 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.727 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.727 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.728 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.729 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
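Each instance reports three disk.device.capacity samples above: two 1 GiB virtio disks (root and ephemeral) and one small device, plausibly the config drive given "config_drive": "True" in the earlier RESP BODY. A sketch of reading the same figures via libvirt (device names are assumptions):

import libvirt

conn = libvirt.openReadOnly("qemu:///system")
dom = conn.lookupByName("instance-00000001")  # 3214d9e6-..., per the log
for dev in ("vda", "vdb"):  # assumed device names
    # blockInfo returns (capacity, allocation, physical) in bytes.
    capacity, allocation, physical = dom.blockInfo(dev)
    print(dev, "disk.device.capacity:", capacity)  # 1073741824 per 1 GiB disk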
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.729 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.729 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.729 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.729 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.730 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.730 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.730 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.731 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
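Each polling cycle above follows the same gate: discover local instances, then check whether the pollster's source belongs to a coordination group. With a group name of None there is no hashring, so the agent keeps every discovered resource and polls locally. A hedged sketch of that filter, where hashrings and belongs_to_self are illustrative stand-ins rather than ceilometer API:

```python
# Sketch of the coordination check seen in the
# "Checking if we need coordination ..." debug lines.
def filter_resources(resources, group_name, hashrings, agent_id):
    if group_name is None or group_name not in hashrings:
        # "... is not configured in a source for polling that requires
        # coordination" -> poll everything discovered locally.
        return resources
    ring = hashrings[group_name]
    # With a hashring, keep only resources assigned to this agent.
    return [r for r in resources if ring.belongs_to_self(agent_id, r)]

resources = ["inst-a", "inst-b"]
assert filter_resources(resources, None, {}, "agent-0") == resources
```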
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.731 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.731 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.731 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.731 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.732 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.732 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.732 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.733 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
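network.outgoing.bytes is a cumulative counter, while the .delta meter reports the change since the previous poll, which is why both instances show 0 on an idle interface. A minimal sketch of deriving a delta from a cumulative counter, assuming a simple per-NIC cache (the cache shape is an assumption, not ceilometer's internal structure):

```python
# Illustrative delta-from-cumulative computation; returns 0 on the
# first poll and on counter reset, matching the zeros in the log.
_previous = {}  # (instance_uuid, nic) -> last cumulative value

def outgoing_bytes_delta(instance_uuid, nic, cumulative):
    key = (instance_uuid, nic)
    prev = _previous.get(key)
    _previous[key] = cumulative
    if prev is None or cumulative < prev:  # first poll or counter reset
        return 0
    return cumulative - prev

uuid = "f0ac9c29-04ba-4737-8af6-8fc91e451e8c"
assert outgoing_bytes_delta(uuid, "tap0", 2398) == 0  # first poll
assert outgoing_bytes_delta(uuid, "tap0", 2398) == 0  # no traffic
assert outgoing_bytes_delta(uuid, "tap0", 2500) == 102
```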
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.733 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.733 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.733 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.733 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.733 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.734 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.734 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74: ceilometer.compute.pollsters.NoVolumeException
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.734 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.734 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.735 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
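Instance eae9f6b6 reports "volume: Unavailable", so the pollster logs a warning and skips the sample, while the other two instances report usage in MB. A sketch of that path, assuming a local NoVolumeException stand-in for ceilometer.compute.pollsters.NoVolumeException (a missing figure can happen when, for example, the guest memory balloon driver is not running):

```python
# Hedged sketch of the "Unavailable" handling for memory.usage.
class NoVolumeException(Exception):
    pass

def memory_usage_sample(instance_uuid, memory_mb):
    if memory_mb is None:
        # No figure from the hypervisor -> no sample, just a warning.
        raise NoVolumeException()
    return ("memory.usage", instance_uuid, memory_mb)

for uuid, mb in [("eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74", None),
                 ("f0ac9c29-04ba-4737-8af6-8fc91e451e8c", 48.953125),
                 ("3214d9e6-3c61-49f0-a353-01201a6aa6db", 48.76171875)]:
    try:
        print(memory_usage_sample(uuid, mb))
    except NoVolumeException:
        print(f"WARNING memory.usage statistic is not available for {uuid}")
```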
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.735 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.736 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.736 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.736 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.736 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.736 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.736 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.bytes volume: 2346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.737 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.737 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.737 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.738 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.738 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.738 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.738 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.738 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: fvt_testing_server>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: fvt_testing_server>]
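The ERROR above is the permanent-blacklist path: the libvirt inspector exposes cumulative counters rather than precomputed rates, so the rate pollster signals that these resources can never yield data and the manager stops handing them to it. A hedged sketch of that mechanism, where PollsterPermanentError mirrors ceilometer.polling.plugin_base.PollsterPermanentError only in spirit:

```python
# Illustrative permanent-error handling; _blacklist is an assumption,
# not the manager's real data structure.
class PollsterPermanentError(Exception):
    def __init__(self, resources):
        self.resources = resources

_blacklist = {}  # pollster name -> set of permanently skipped resources

def run_pollster(name, pollster, resources):
    todo = [r for r in resources if r not in _blacklist.get(name, set())]
    try:
        return pollster(todo)
    except PollsterPermanentError as exc:
        print(f"ERROR Prevent pollster {name} from polling "
              f"{exc.resources} anymore!")
        _blacklist.setdefault(name, set()).update(exc.resources)
        return []

def rate_pollster(resources):
    if resources:  # inspector cannot provide rate data for these
        raise PollsterPermanentError(resources)
    return []

run_pollster("network.outgoing.bytes.rate", rate_pollster,
             ["fvt_testing_server"])
# Second cycle: the resource is filtered out before the pollster runs.
run_pollster("network.outgoing.bytes.rate", rate_pollster,
             ["fvt_testing_server"])
```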
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.739 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.739 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.739 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.739 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.739 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.739 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.740 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.740 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.741 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.741 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.741 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.741 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.741 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.743 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T23:32:37.730155) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.744 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T23:32:37.731992) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.744 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T23:32:37.733854) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.744 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T23:32:37.736415) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.744 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-26T23:32:37.738378) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.744 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T23:32:37.739614) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.744 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T23:32:37.741618) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
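Note the two sides of the heartbeat pattern above: worker process 14 records a "heartbeat update" as it runs each pollster, and process 12 later reads those timestamps and logs "Updated heartbeat for ...". A minimal sketch of that producer/reader split, assuming a shared dict guarded by a lock (a real agent would share state through a coordination backend or queue):

```python
# Hedged two-sided heartbeat sketch; the dict-with-lock is an
# illustrative simplification of the shared state.
import threading
from datetime import datetime, timezone

_lock = threading.Lock()
_heartbeats = {}  # pollster name -> datetime of last run

def heartbeat(pollster_name):
    # Called by the polling worker as each pollster executes.
    with _lock:
        _heartbeats[pollster_name] = datetime.now(timezone.utc)

def update_status():
    # Called periodically by a status updater in another process/thread.
    with _lock:
        snapshot = dict(_heartbeats)
    for name, ts in snapshot.items():
        print(f"Updated heartbeat for {name} ({ts.isoformat()})")

heartbeat("network.outgoing.bytes")
update_status()
```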
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.845 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.845 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.846 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 nova_compute[189387]: 2025-11-26 23:32:37.909 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:32:37 compute-0 nova_compute[189387]: 2025-11-26 23:32:37.910 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:32:37 compute-0 nova_compute[189387]: 2025-11-26 23:32:37.910 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:32:37 compute-0 nova_compute[189387]: 2025-11-26 23:32:37.910 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:32:37 compute-0 nova_compute[189387]: 2025-11-26 23:32:37.911 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
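The interleaved nova_compute burst is its periodic-task loop; the "skipping..." line means soft-delete reclaim is disabled because the configured interval is not positive, so the task returns before doing any work. A minimal sketch of that guard, with CONF as a stand-in for nova's oslo.config object:

```python
# Hedged sketch of the guard behind
# "CONF.reclaim_instance_interval <= 0, skipping...".
class CONF:
    reclaim_instance_interval = 0  # default: soft-delete reclaim disabled

def _reclaim_queued_deletes(context=None):
    if CONF.reclaim_instance_interval <= 0:
        print("CONF.reclaim_instance_interval <= 0, skipping...")
        return
    # Otherwise: look up SOFT_DELETED instances older than the
    # interval and delete them for real (omitted in this sketch).

_reclaim_queued_deletes()
```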
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.939 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.939 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:37.940 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.047 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.048 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.048 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.048 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
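The cumulative per-device byte and request counters in this cycle are the kind of figures libvirt exposes through its block-statistics call, which returns (rd_req, rd_bytes, wr_req, wr_bytes, errs) per device. A hedged sketch using the libvirt Python binding; the connection URI and device names are assumptions, and it only runs on a host with a libvirt daemon:

```python
# Illustrative read of cumulative per-device counters via libvirt
# (pip install libvirt-python); error handling omitted.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByUUIDString("f0ac9c29-04ba-4737-8af6-8fc91e451e8c")
for dev in ("vda", "vdb", "sda"):  # device names assumed for the example
    rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
    print(f"{dev}: disk.device.read.bytes={rd_bytes} "
          f"disk.device.read.requests={rd_req}")
```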
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.049 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.049 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.049 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.050 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.050 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.050 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.051 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.051 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.052 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.052 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.052 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.052 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.052 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.053 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.053 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.054 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.054 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.054 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.054 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.054 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.device.read.latency volume: 577921806 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.055 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.055 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.device.read.latency volume: 3966795 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.056 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.latency volume: 1305394210 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.056 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.latency volume: 123508779 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.056 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.latency volume: 100732301 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.057 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.latency volume: 766490036 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.057 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.latency volume: 135917507 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.058 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.latency volume: 99383059 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.059 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.059 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.059 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.059 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.059 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.059 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.060 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.060 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.061 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.061 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.061 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.062 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.062 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.062 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.063 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.064 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.064 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.064 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.064 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.065 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.065 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.065 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.066 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.066 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.066 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.067 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.068 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.068 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.069 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
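Taken together, the disk.device.capacity, disk.device.allocation, and disk.device.usage cycles report three sizes per device that plausibly correspond to libvirt's block-info triplet of capacity, allocation, and physical size. A hedged sketch of reading that triplet (same caveats as the blockStats sketch above: assumed URI and device name, requires a libvirt host):

```python
# Illustrative mapping from libvirt blockInfo() to the three size
# meters; the correspondence is an assumption for this sketch.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByUUIDString("3214d9e6-3c61-49f0-a353-01201a6aa6db")
capacity, allocation, physical = dom.blockInfo("vda")
print("disk.device.capacity  ", capacity)    # logical size of the disk
print("disk.device.allocation", allocation)  # bytes allocated in the image
print("disk.device.usage     ", physical)    # on-host physical size
```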
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.069 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.070 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.070 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.070 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.071 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.071 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.071 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.072 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.072 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.073 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.073 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.073 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.073 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.074 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.074 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.074 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.074 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.075 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.075 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.075 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.075 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.075 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.075 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.076 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.076 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.076 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.076 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.077 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.077 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.077 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.077 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.077 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.077 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.078 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.078 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.078 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
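All three instances report power.state volume 1, which matches nova's power_state.RUNNING. A small sketch of decoding the numeric meter; the value table below follows nova.compute.power_state as an assumption for illustration:

```python
# Assumed power-state codes (see nova.compute.power_state); only the
# value 1 (RUNNING) is confirmed by the log above.
POWER_STATES = {
    0: "NOSTATE",
    1: "RUNNING",
    3: "PAUSED",
    4: "SHUTDOWN",
    6: "CRASHED",
    7: "SUSPENDED",
}

for uuid, state in [("eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74", 1),
                    ("f0ac9c29-04ba-4737-8af6-8fc91e451e8c", 1),
                    ("3214d9e6-3c61-49f0-a353-01201a6aa6db", 1)]:
    print(f"{uuid}/power.state volume: {state} ({POWER_STATES[state]})")
```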
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.078 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.078 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.079 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.079 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.079 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.079 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.080 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.080 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.latency volume: 2831606495 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.080 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.latency volume: 12954358 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.080 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.080 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.latency volume: 2067067389 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.081 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.latency volume: 14796330 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.081 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.081 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
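Each instance above yields three disk.device.write.latency samples, one per attached block device, and idle devices report volume 0. A hypothetical sketch of that per-device fan-out (the device names vda/vdb/vdc are assumptions; the volumes are the ones logged for instance f0ac9c29):

    # Illustrative fan-out: one sample per (instance, device) pair, matching
    # the three _stats_to_sample lines per instance UUID above.
    def device_samples(instance_id, per_device_latency):
        for device, latency in per_device_latency.items():
            yield {'resource_id': instance_id,
                   'meter': 'disk.device.write.latency',
                   'device': device,  # assumed device names
                   'volume': latency}

    stats = {'vda': 2831606495, 'vdb': 12954358, 'vdc': 0}  # values from the log
    for sample in device_samples('f0ac9c29-04ba-4737-8af6-8fc91e451e8c', stats):
        print(sample)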
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.081 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.081 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.082 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.082 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.082 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.082 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.082 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
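A ".delta" meter reports the change since the previous poll rather than the raw cumulative counter. A sketch of the per-resource caching this implies (a simplification of the idea, not ceilometer's actual cache):

    # Hypothetical delta computation: remember the last cumulative reading
    # per resource and report the difference on each interval.
    _previous = {}

    def to_delta(resource_id, cumulative):
        delta = cumulative - _previous.get(resource_id, cumulative)
        _previous[resource_id] = cumulative
        return delta

    to_delta('f0ac9c29', 1000)         # first poll: no baseline, delta 0
    print(to_delta('f0ac9c29', 1084))  # next poll: 84, as logged above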
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.082 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.083 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.083 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.083 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.083 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.083 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.083 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T23:32:38.050255) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.084 14 DEBUG ceilometer.compute.pollsters [-] eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.084 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.084 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.084 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.084 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.085 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.085 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.085 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T23:32:38.052668) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.086 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.086 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.086 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.086 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.086 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.086 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.086 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T23:32:38.054621) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.086 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: fvt_testing_server>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: fvt_testing_server>]
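The ERROR above is the permanent-failure path: LibvirtInspector cannot supply instantaneous ".rate" meters (see the preceding DEBUG line), so the pollster raises PollsterPermanentError and the manager excludes that resource from future runs of this pollster instead of retrying every interval. A sketch of that blacklisting, inferred from the message rather than taken from the manager's source:

    # Illustrative blacklist: a resource that raises PollsterPermanentError
    # is never handed to this pollster on this source again.
    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            super().__init__(resources)
            self.resources = resources

    blocked = set()

    def poll(resources, get_samples):
        for res in resources:
            if res in blocked:
                continue
            try:
                get_samples(res)
            except PollsterPermanentError as exc:
                blocked.update(exc.resources)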
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.090 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T23:32:38.059888) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.090 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T23:32:38.064707) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.090 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T23:32:38.070824) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.090 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T23:32:38.075023) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.090 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T23:32:38.077823) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.090 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T23:32:38.079504) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.091 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T23:32:38.082262) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.091 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T23:32:38.083371) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:32:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:32:38.091 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-26T23:32:38.086625) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
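Worker 14 emits a heartbeat as each pollster finishes and worker 12 persists it: every "Pollster heartbeat update: <meter>" above is followed by a matching "Updated heartbeat for <meter> (<timestamp>)". A minimal sketch of such a liveness map (structure assumed):

    import datetime

    # Hypothetical liveness map: meter name -> time of last successful poll.
    heartbeats = {}

    def update_heartbeat(meter):
        heartbeats[meter] = datetime.datetime.now(datetime.timezone.utc)

    update_heartbeat('disk.device.write.requests')
    print(heartbeats['disk.device.write.requests'].isoformat())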
Nov 26 23:32:38 compute-0 nova_compute[189387]: 2025-11-26 23:32:38.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:32:38 compute-0 podman[245867]: 2025-11-26 23:32:38.806816319 +0000 UTC m=+0.089204984 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:32:40 compute-0 nova_compute[189387]: 2025-11-26 23:32:40.182 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:32:40 compute-0 nova_compute[189387]: 2025-11-26 23:32:40.388 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:32:40 compute-0 nova_compute[189387]: 2025-11-26 23:32:40.576 189391 DEBUG oslo_concurrency.lockutils [None req-843b557f-781e-48bc-89c3-5413e384e1dc 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:32:40 compute-0 nova_compute[189387]: 2025-11-26 23:32:40.577 189391 DEBUG oslo_concurrency.lockutils [None req-843b557f-781e-48bc-89c3-5413e384e1dc 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:32:40 compute-0 nova_compute[189387]: 2025-11-26 23:32:40.577 189391 DEBUG oslo_concurrency.lockutils [None req-843b557f-781e-48bc-89c3-5413e384e1dc 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:32:40 compute-0 nova_compute[189387]: 2025-11-26 23:32:40.578 189391 DEBUG oslo_concurrency.lockutils [None req-843b557f-781e-48bc-89c3-5413e384e1dc 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:32:40 compute-0 nova_compute[189387]: 2025-11-26 23:32:40.578 189391 DEBUG oslo_concurrency.lockutils [None req-843b557f-781e-48bc-89c3-5413e384e1dc 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
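The acquire/release pairs above are oslo.concurrency's named-lock pattern: nova serializes per-instance operations by locking on the instance UUID, with a sibling "<uuid>-events" lock for event bookkeeping. The decorator form of the same pattern, as oslo.concurrency provides it (the lock name here is an example, not the UUID from the log):

    from oslo_concurrency import lockutils

    # Any two callers that synchronize on the same lock name are serialized,
    # which is what produces the "Acquiring lock ... / Lock ... acquired"
    # pairs logged above.
    @lockutils.synchronized('example-instance-uuid')
    def do_terminate_instance():
        pass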
Nov 26 23:32:40 compute-0 nova_compute[189387]: 2025-11-26 23:32:40.580 189391 INFO nova.compute.manager [None req-843b557f-781e-48bc-89c3-5413e384e1dc 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Terminating instance
Nov 26 23:32:40 compute-0 nova_compute[189387]: 2025-11-26 23:32:40.582 189391 DEBUG oslo_concurrency.lockutils [None req-843b557f-781e-48bc-89c3-5413e384e1dc 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "refresh_cache-eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:32:40 compute-0 nova_compute[189387]: 2025-11-26 23:32:40.582 189391 DEBUG oslo_concurrency.lockutils [None req-843b557f-781e-48bc-89c3-5413e384e1dc 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquired lock "refresh_cache-eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:32:40 compute-0 nova_compute[189387]: 2025-11-26 23:32:40.582 189391 DEBUG nova.network.neutron [None req-843b557f-781e-48bc-89c3-5413e384e1dc 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 26 23:32:41 compute-0 nova_compute[189387]: 2025-11-26 23:32:41.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:32:41 compute-0 nova_compute[189387]: 2025-11-26 23:32:41.344 189391 DEBUG nova.network.neutron [None req-843b557f-781e-48bc-89c3-5413e384e1dc 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 26 23:32:41 compute-0 nova_compute[189387]: 2025-11-26 23:32:41.763 189391 DEBUG nova.network.neutron [None req-843b557f-781e-48bc-89c3-5413e384e1dc 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 23:32:41 compute-0 nova_compute[189387]: 2025-11-26 23:32:41.793 189391 DEBUG oslo_concurrency.lockutils [None req-843b557f-781e-48bc-89c3-5413e384e1dc 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Releasing lock "refresh_cache-eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 23:32:41 compute-0 nova_compute[189387]: 2025-11-26 23:32:41.794 189391 DEBUG nova.compute.manager [None req-843b557f-781e-48bc-89c3-5413e384e1dc 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 26 23:32:41 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Nov 26 23:32:41 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 16.574s CPU time.
Nov 26 23:32:41 compute-0 systemd-machined[155674]: Machine qemu-5-instance-00000005 terminated.
Nov 26 23:32:42 compute-0 nova_compute[189387]: 2025-11-26 23:32:42.093 189391 INFO nova.virt.libvirt.driver [-] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Instance destroyed successfully.
Nov 26 23:32:42 compute-0 nova_compute[189387]: 2025-11-26 23:32:42.094 189391 DEBUG nova.objects.instance [None req-843b557f-781e-48bc-89c3-5413e384e1dc 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lazy-loading 'resources' on Instance uuid eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 23:32:42 compute-0 nova_compute[189387]: 2025-11-26 23:32:42.113 189391 INFO nova.virt.libvirt.driver [None req-843b557f-781e-48bc-89c3-5413e384e1dc 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Deleting instance files /var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74_del
Nov 26 23:32:42 compute-0 nova_compute[189387]: 2025-11-26 23:32:42.115 189391 INFO nova.virt.libvirt.driver [None req-843b557f-781e-48bc-89c3-5413e384e1dc 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Deletion of /var/lib/nova/instances/eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74_del complete
Nov 26 23:32:42 compute-0 nova_compute[189387]: 2025-11-26 23:32:42.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:32:42 compute-0 nova_compute[189387]: 2025-11-26 23:32:42.174 189391 INFO nova.compute.manager [None req-843b557f-781e-48bc-89c3-5413e384e1dc 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Took 0.38 seconds to destroy the instance on the hypervisor.
Nov 26 23:32:42 compute-0 nova_compute[189387]: 2025-11-26 23:32:42.174 189391 DEBUG oslo.service.loopingcall [None req-843b557f-781e-48bc-89c3-5413e384e1dc 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 26 23:32:42 compute-0 nova_compute[189387]: 2025-11-26 23:32:42.175 189391 DEBUG nova.compute.manager [-] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 26 23:32:42 compute-0 nova_compute[189387]: 2025-11-26 23:32:42.175 189391 DEBUG nova.network.neutron [-] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 26 23:32:43 compute-0 nova_compute[189387]: 2025-11-26 23:32:43.357 189391 DEBUG nova.network.neutron [-] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 26 23:32:43 compute-0 nova_compute[189387]: 2025-11-26 23:32:43.375 189391 DEBUG nova.network.neutron [-] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 23:32:43 compute-0 nova_compute[189387]: 2025-11-26 23:32:43.392 189391 INFO nova.compute.manager [-] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Took 1.22 seconds to deallocate network for instance.
Nov 26 23:32:43 compute-0 nova_compute[189387]: 2025-11-26 23:32:43.441 189391 DEBUG oslo_concurrency.lockutils [None req-843b557f-781e-48bc-89c3-5413e384e1dc 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:32:43 compute-0 nova_compute[189387]: 2025-11-26 23:32:43.442 189391 DEBUG oslo_concurrency.lockutils [None req-843b557f-781e-48bc-89c3-5413e384e1dc 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:32:43 compute-0 nova_compute[189387]: 2025-11-26 23:32:43.584 189391 DEBUG nova.compute.provider_tree [None req-843b557f-781e-48bc-89c3-5413e384e1dc 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:32:43 compute-0 nova_compute[189387]: 2025-11-26 23:32:43.603 189391 DEBUG nova.scheduler.client.report [None req-843b557f-781e-48bc-89c3-5413e384e1dc 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
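The inventory dict above is what placement uses for capacity checks: usable capacity per resource class is (total - reserved) * allocation_ratio. Working through the logged numbers:

    # Effective capacity implied by the inventory logged above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~70.2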
Nov 26 23:32:43 compute-0 nova_compute[189387]: 2025-11-26 23:32:43.626 189391 DEBUG oslo_concurrency.lockutils [None req-843b557f-781e-48bc-89c3-5413e384e1dc 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:32:43 compute-0 nova_compute[189387]: 2025-11-26 23:32:43.658 189391 INFO nova.scheduler.client.report [None req-843b557f-781e-48bc-89c3-5413e384e1dc 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Deleted allocations for instance eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74
Nov 26 23:32:43 compute-0 nova_compute[189387]: 2025-11-26 23:32:43.761 189391 DEBUG oslo_concurrency.lockutils [None req-843b557f-781e-48bc-89c3-5413e384e1dc 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.185s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:32:44 compute-0 podman[245904]: 2025-11-26 23:32:44.863897142 +0000 UTC m=+0.143827933 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Nov 26 23:32:45 compute-0 nova_compute[189387]: 2025-11-26 23:32:45.184 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:32:45 compute-0 nova_compute[189387]: 2025-11-26 23:32:45.391 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:32:50 compute-0 nova_compute[189387]: 2025-11-26 23:32:50.187 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:32:50 compute-0 nova_compute[189387]: 2025-11-26 23:32:50.394 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:32:52 compute-0 podman[245925]: 2025-11-26 23:32:52.919527384 +0000 UTC m=+0.194712433 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 26 23:32:55 compute-0 nova_compute[189387]: 2025-11-26 23:32:55.191 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:32:55 compute-0 nova_compute[189387]: 2025-11-26 23:32:55.398 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:32:55 compute-0 podman[245954]: 2025-11-26 23:32:55.841054011 +0000 UTC m=+0.092104332 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Nov 26 23:32:55 compute-0 podman[245955]: 2025-11-26 23:32:55.844280667 +0000 UTC m=+0.108696255 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 23:32:55 compute-0 podman[245956]: 2025-11-26 23:32:55.849645331 +0000 UTC m=+0.103225129 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.component=ubi9-minimal-container, config_id=edpm, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, managed_by=edpm_ansible)
Nov 26 23:32:55 compute-0 podman[245953]: 2025-11-26 23:32:55.853948066 +0000 UTC m=+0.122594937 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 23:32:55 compute-0 podman[245952]: 2025-11-26 23:32:55.86793802 +0000 UTC m=+0.139292574 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release-0.7.12=, vcs-type=git, com.redhat.component=ubi9-container, architecture=x86_64, config_id=edpm, container_name=kepler, version=9.4, managed_by=edpm_ansible, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public)
Nov 26 23:32:57 compute-0 nova_compute[189387]: 2025-11-26 23:32:57.090 189391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764199962.0872564, eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 23:32:57 compute-0 nova_compute[189387]: 2025-11-26 23:32:57.091 189391 INFO nova.compute.manager [-] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] VM Stopped (Lifecycle Event)
Nov 26 23:32:57 compute-0 nova_compute[189387]: 2025-11-26 23:32:57.118 189391 DEBUG nova.compute.manager [None req-64b086e0-18fc-4d32-a22f-5f306288eb99 - - - - - -] [instance: eae9f6b6-b657-4b7a-8b55-ab1d9b17fa74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 23:32:59 compute-0 podman[203621]: time="2025-11-26T23:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:32:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:32:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
Nov 26 23:33:00 compute-0 nova_compute[189387]: 2025-11-26 23:33:00.194 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:33:00 compute-0 nova_compute[189387]: 2025-11-26 23:33:00.401 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:33:01 compute-0 openstack_network_exporter[205787]: ERROR   23:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:33:01 compute-0 openstack_network_exporter[205787]: ERROR   23:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:33:01 compute-0 openstack_network_exporter[205787]: ERROR   23:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:33:01 compute-0 openstack_network_exporter[205787]: ERROR   23:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:33:01 compute-0 openstack_network_exporter[205787]: ERROR   23:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
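These exporter errors are expected on a compute node: ovn-northd and the OVS/OVN database servers run on the control plane, so the exporter's appctl probes find no local control socket, and the PMD queries fail because no userspace (netdev) datapath exists here. A sketch of the kind of probe that fails (the socket path pattern is an assumption for illustration):

    import glob

    # Hypothetical probe: appctl needs a per-daemon control socket such as
    # <rundir>/ovn-northd.<pid>.ctl before it can issue commands.
    if not glob.glob('/var/run/ovn/ovn-northd.*.ctl'):
        print('no control socket files found for ovn-northd')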
Nov 26 23:33:05 compute-0 nova_compute[189387]: 2025-11-26 23:33:05.197 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:33:05 compute-0 nova_compute[189387]: 2025-11-26 23:33:05.404 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:33:05 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Nov 26 23:33:05 compute-0 systemd[1]: session-30.scope: Consumed 1.332s CPU time.
Nov 26 23:33:05 compute-0 systemd-logind[819]: Session 30 logged out. Waiting for processes to exit.
Nov 26 23:33:05 compute-0 systemd-logind[819]: Removed session 30.
Nov 26 23:33:05 compute-0 podman[246049]: 2025-11-26 23:33:05.648596588 +0000 UTC m=+0.117861840 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd)
Nov 26 23:33:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:33:09.639 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:33:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:33:09.639 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:33:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:33:09.640 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:33:09 compute-0 podman[246069]: 2025-11-26 23:33:09.797661182 +0000 UTC m=+0.090411616 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 23:33:10 compute-0 nova_compute[189387]: 2025-11-26 23:33:10.201 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:33:10 compute-0 nova_compute[189387]: 2025-11-26 23:33:10.407 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:33:15 compute-0 nova_compute[189387]: 2025-11-26 23:33:15.204 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:33:15 compute-0 nova_compute[189387]: 2025-11-26 23:33:15.410 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:33:15 compute-0 podman[246092]: 2025-11-26 23:33:15.834651075 +0000 UTC m=+0.118319331 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 26 23:33:17 compute-0 systemd-logind[819]: New session 31 of user zuul.
Nov 26 23:33:17 compute-0 systemd[1]: Started Session 31 of User zuul.
Nov 26 23:33:18 compute-0 python3[246291]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
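[editor's note] This and the later `python3[...]` entries (23:33:26, 23:33:36, 23:33:51) are the Zuul job's Ansible validation loop: one `ansible.legacy.command` task per telemetry container, each piping `podman ps -a --format "{{.Names}} {{.Status}}"` through `grep`. A rough Python equivalent of one such check, under the assumption that reproducing the shell pipeline is all the task does (`container_status` is an illustrative name, not part of the job):

```python
import subprocess

def container_status(name):
    """List podman containers and keep the lines naming `name`."""
    out = subprocess.run(
        ["podman", "ps", "-a", "--format", "{{.Names}} {{.Status}}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in out.splitlines() if name in line]

print(container_status("node_exporter"))
# e.g. ['node_exporter Up 2 hours (healthy)']
```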
Nov 26 23:33:20 compute-0 nova_compute[189387]: 2025-11-26 23:33:20.207 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:33:20 compute-0 nova_compute[189387]: 2025-11-26 23:33:20.412 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:33:23 compute-0 podman[246329]: 2025-11-26 23:33:23.894602169 +0000 UTC m=+0.166740447 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller)
Nov 26 23:33:25 compute-0 nova_compute[189387]: 2025-11-26 23:33:25.209 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:33:25 compute-0 nova_compute[189387]: 2025-11-26 23:33:25.414 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:33:26 compute-0 podman[246504]: 2025-11-26 23:33:26.694649902 +0000 UTC m=+0.092257845 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Nov 26 23:33:26 compute-0 podman[246502]: 2025-11-26 23:33:26.700392226 +0000 UTC m=+0.100817525 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.29.0, name=ubi9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, release-0.7.12=, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=base rhel9, release=1214.1726694543, version=9.4, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Nov 26 23:33:26 compute-0 podman[246506]: 2025-11-26 23:33:26.710445104 +0000 UTC m=+0.094815334 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, io.openshift.tags=minimal rhel9, version=9.6, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal)
Nov 26 23:33:26 compute-0 podman[246505]: 2025-11-26 23:33:26.727389557 +0000 UTC m=+0.110377990 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 26 23:33:26 compute-0 podman[246503]: 2025-11-26 23:33:26.727923961 +0000 UTC m=+0.120687795 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 23:33:26 compute-0 python3[246615]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 23:33:29 compute-0 podman[203621]: time="2025-11-26T23:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:33:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:33:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4819 "" "Go-http-client/1.1"
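[editor's note] The two GET lines above are the libpod REST API being scraped over the podman socket; per the podman_exporter config_data logged at 23:33:09, the client is configured with `CONTAINER_HOST=unix:///run/podman/podman.sock`. A stdlib-only sketch of the same `containers/json` call, assuming that socket path; `UnixHTTPConnection` is illustrative glue, not podman's client API:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a UNIX domain socket instead of TCP."""
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
for c in json.loads(conn.getresponse().read()):
    print(c["Names"], c["State"])
```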
Nov 26 23:33:30 compute-0 nova_compute[189387]: 2025-11-26 23:33:30.211 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:33:30 compute-0 nova_compute[189387]: 2025-11-26 23:33:30.417 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:33:31 compute-0 openstack_network_exporter[205787]: ERROR   23:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:33:31 compute-0 openstack_network_exporter[205787]: ERROR   23:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:33:31 compute-0 openstack_network_exporter[205787]: ERROR   23:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:33:31 compute-0 openstack_network_exporter[205787]: ERROR   23:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:33:31 compute-0 openstack_network_exporter[205787]: ERROR   23:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:33:34 compute-0 nova_compute[189387]: 2025-11-26 23:33:34.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:33:34 compute-0 nova_compute[189387]: 2025-11-26 23:33:34.124 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 23:33:34 compute-0 nova_compute[189387]: 2025-11-26 23:33:34.397 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-f0ac9c29-04ba-4737-8af6-8fc91e451e8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:33:34 compute-0 nova_compute[189387]: 2025-11-26 23:33:34.398 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-f0ac9c29-04ba-4737-8af6-8fc91e451e8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:33:34 compute-0 nova_compute[189387]: 2025-11-26 23:33:34.399 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 26 23:33:35 compute-0 nova_compute[189387]: 2025-11-26 23:33:35.214 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:33:35 compute-0 nova_compute[189387]: 2025-11-26 23:33:35.420 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:33:35 compute-0 podman[246694]: 2025-11-26 23:33:35.856202011 +0000 UTC m=+0.141690577 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 26 23:33:36 compute-0 python3[246859]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 23:33:37 compute-0 nova_compute[189387]: 2025-11-26 23:33:37.441 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Updating instance_info_cache with network_info: [{"id": "31b6bc9a-cd65-44ef-96ea-c84d392117c8", "address": "fa:16:3e:22:3f:da", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.69", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31b6bc9a-cd", "ovs_interfaceid": "31b6bc9a-cd65-44ef-96ea-c84d392117c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:33:37 compute-0 nova_compute[189387]: 2025-11-26 23:33:37.460 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-f0ac9c29-04ba-4737-8af6-8fc91e451e8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:33:37 compute-0 nova_compute[189387]: 2025-11-26 23:33:37.461 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
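[editor's note] The `_heal_instance_info_cache` pass above grabs the per-instance `refresh_cache-<uuid>` lock, forcibly re-queries Neutron, and rewrites the cached network_info (the JSON payload at 23:33:37.441). Walking that structure to pull out the addresses, with the literal trimmed down from the logged payload:

```python
# Trimmed from the 23:33:37.441 network_info entry above.
network_info = [{
    "id": "31b6bc9a-cd65-44ef-96ea-c84d392117c8",
    "network": {"subnets": [{"cidr": "192.168.0.0/24", "ips": [{
        "address": "192.168.0.69",
        "floating_ips": [{"address": "192.168.122.192"}],
    }]}]},
}]

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            print(vif["id"], ip["address"], "fixed")
            for fip in ip.get("floating_ips", []):
                print(vif["id"], fip["address"], "floating")
# -> fixed 192.168.0.69, floating 192.168.122.192 on VIF 31b6bc9a-...
```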
Nov 26 23:33:37 compute-0 nova_compute[189387]: 2025-11-26 23:33:37.462 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:33:37 compute-0 nova_compute[189387]: 2025-11-26 23:33:37.463 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:33:37 compute-0 nova_compute[189387]: 2025-11-26 23:33:37.464 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:33:37 compute-0 nova_compute[189387]: 2025-11-26 23:33:37.465 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 23:33:37 compute-0 nova_compute[189387]: 2025-11-26 23:33:37.466 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:33:37 compute-0 nova_compute[189387]: 2025-11-26 23:33:37.501 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:33:37 compute-0 nova_compute[189387]: 2025-11-26 23:33:37.501 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:33:37 compute-0 nova_compute[189387]: 2025-11-26 23:33:37.502 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:33:37 compute-0 nova_compute[189387]: 2025-11-26 23:33:37.503 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 23:33:37 compute-0 nova_compute[189387]: 2025-11-26 23:33:37.613 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:33:37 compute-0 nova_compute[189387]: 2025-11-26 23:33:37.694 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:33:37 compute-0 nova_compute[189387]: 2025-11-26 23:33:37.696 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:33:37 compute-0 nova_compute[189387]: 2025-11-26 23:33:37.751 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:33:37 compute-0 nova_compute[189387]: 2025-11-26 23:33:37.753 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:33:37 compute-0 nova_compute[189387]: 2025-11-26 23:33:37.818 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:33:37 compute-0 nova_compute[189387]: 2025-11-26 23:33:37.819 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:33:37 compute-0 nova_compute[189387]: 2025-11-26 23:33:37.879 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:33:37 compute-0 nova_compute[189387]: 2025-11-26 23:33:37.886 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:33:37 compute-0 nova_compute[189387]: 2025-11-26 23:33:37.981 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:33:37 compute-0 nova_compute[189387]: 2025-11-26 23:33:37.982 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:33:38 compute-0 nova_compute[189387]: 2025-11-26 23:33:38.077 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:33:38 compute-0 nova_compute[189387]: 2025-11-26 23:33:38.078 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:33:38 compute-0 nova_compute[189387]: 2025-11-26 23:33:38.134 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:33:38 compute-0 nova_compute[189387]: 2025-11-26 23:33:38.136 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:33:38 compute-0 nova_compute[189387]: 2025-11-26 23:33:38.217 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
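[editor's note] The resource audit above runs `qemu-img info` once per disk (root and eph0, for both instances), and each invocation is wrapped in `oslo_concurrency.prlimit` with `--as=1073741824 --cpu=30`, i.e. a 1 GiB address-space cap and a 30 s CPU cap, so a malformed image cannot hang or bloat the periodic task. A rough stand-in using plain rlimits in a `preexec_fn` rather than the oslo wrapper; the flags are taken verbatim from the commands logged above:

```python
import json
import os
import resource
import subprocess

def qemu_img_info(path, mem=1 << 30, cpu=30):
    """Probe an image under address-space and CPU limits, like the audit."""
    def _limits():
        resource.setrlimit(resource.RLIMIT_AS, (mem, mem))
        resource.setrlimit(resource.RLIMIT_CPU, (cpu, cpu))
    out = subprocess.run(
        ["qemu-img", "info", path, "--force-share", "--output=json"],
        capture_output=True, check=True, preexec_fn=_limits,
        env={**os.environ, "LC_ALL": "C", "LANG": "C"},
    )
    return json.loads(out.stdout)
```

`--force-share` is what lets the probe read a disk that the running QEMU process already holds open read-write.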
Nov 26 23:33:38 compute-0 nova_compute[189387]: 2025-11-26 23:33:38.575 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 23:33:38 compute-0 nova_compute[189387]: 2025-11-26 23:33:38.576 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4900MB free_disk=72.33485794067383GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 23:33:38 compute-0 nova_compute[189387]: 2025-11-26 23:33:38.576 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:33:38 compute-0 nova_compute[189387]: 2025-11-26 23:33:38.577 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:33:38 compute-0 nova_compute[189387]: 2025-11-26 23:33:38.662 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 3214d9e6-3c61-49f0-a353-01201a6aa6db actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 23:33:38 compute-0 nova_compute[189387]: 2025-11-26 23:33:38.662 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance f0ac9c29-04ba-4737-8af6-8fc91e451e8c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 23:33:38 compute-0 nova_compute[189387]: 2025-11-26 23:33:38.663 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 23:33:38 compute-0 nova_compute[189387]: 2025-11-26 23:33:38.663 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 23:33:38 compute-0 nova_compute[189387]: 2025-11-26 23:33:38.759 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:33:38 compute-0 nova_compute[189387]: 2025-11-26 23:33:38.774 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 23:33:38 compute-0 nova_compute[189387]: 2025-11-26 23:33:38.804 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 23:33:38 compute-0 nova_compute[189387]: 2025-11-26 23:33:38.805 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
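[editor's note] The inventory dict logged at 23:33:38.774 is what the scheduler report client would push to placement; placement derives the capacity visible to the scheduler as (total - reserved) * allocation_ratio per resource class. Working that through the logged values (a sketch over the dict as printed, nothing added):

```python
# Inventory as logged at 23:33:38.774 (min/max_unit, step_size omitted).
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, cap)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2
```

That squares with the final resource view above: 2 of 8 physical vCPUs allocated against a schedulable 32, and 1536 MB of a schedulable 7168 MB in use.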
Nov 26 23:33:40 compute-0 nova_compute[189387]: 2025-11-26 23:33:40.217 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:33:40 compute-0 nova_compute[189387]: 2025-11-26 23:33:40.422 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:33:40 compute-0 podman[246922]: 2025-11-26 23:33:40.822479087 +0000 UTC m=+0.111284995 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 23:33:41 compute-0 nova_compute[189387]: 2025-11-26 23:33:41.468 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:33:41 compute-0 nova_compute[189387]: 2025-11-26 23:33:41.470 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:33:42 compute-0 nova_compute[189387]: 2025-11-26 23:33:42.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:33:44 compute-0 nova_compute[189387]: 2025-11-26 23:33:44.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:33:45 compute-0 nova_compute[189387]: 2025-11-26 23:33:45.221 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:33:45 compute-0 nova_compute[189387]: 2025-11-26 23:33:45.426 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:33:46 compute-0 podman[246944]: 2025-11-26 23:33:46.830609551 +0000 UTC m=+0.119851624 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:33:50 compute-0 nova_compute[189387]: 2025-11-26 23:33:50.223 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:33:50 compute-0 nova_compute[189387]: 2025-11-26 23:33:50.428 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:33:51 compute-0 nova_compute[189387]: 2025-11-26 23:33:51.121 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:33:51 compute-0 python3[247139]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 26 23:33:54 compute-0 podman[247177]: 2025-11-26 23:33:54.865020241 +0000 UTC m=+0.148816197 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Nov 26 23:33:55 compute-0 nova_compute[189387]: 2025-11-26 23:33:55.226 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:33:55 compute-0 nova_compute[189387]: 2025-11-26 23:33:55.432 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:33:57 compute-0 podman[247206]: 2025-11-26 23:33:57.842369348 +0000 UTC m=+0.100534836 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi)
Nov 26 23:33:57 compute-0 podman[247204]: 2025-11-26 23:33:57.846775107 +0000 UTC m=+0.115363624 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 23:33:57 compute-0 podman[247207]: 2025-11-26 23:33:57.859008633 +0000 UTC m=+0.111335525 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, config_id=edpm, io.openshift.tags=minimal rhel9, release=1755695350, version=9.6, io.openshift.expose-services=, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7)
Nov 26 23:33:57 compute-0 podman[247205]: 2025-11-26 23:33:57.864395927 +0000 UTC m=+0.116685798 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 26 23:33:57 compute-0 podman[247203]: 2025-11-26 23:33:57.874013384 +0000 UTC m=+0.148515969 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.openshift.tags=base rhel9, release-0.7.12=, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, name=ubi9, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler)
Nov 26 23:33:59 compute-0 podman[203621]: time="2025-11-26T23:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:33:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:33:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4813 "" "Go-http-client/1.1"
Nov 26 23:34:00 compute-0 nova_compute[189387]: 2025-11-26 23:34:00.229 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:34:00 compute-0 nova_compute[189387]: 2025-11-26 23:34:00.435 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:34:01 compute-0 openstack_network_exporter[205787]: ERROR   23:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:34:01 compute-0 openstack_network_exporter[205787]: ERROR   23:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:34:01 compute-0 openstack_network_exporter[205787]: ERROR   23:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:34:01 compute-0 openstack_network_exporter[205787]: ERROR   23:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:34:05 compute-0 nova_compute[189387]: 2025-11-26 23:34:05.233 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:34:05 compute-0 nova_compute[189387]: 2025-11-26 23:34:05.438 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:34:06 compute-0 podman[247300]: 2025-11-26 23:34:06.860740453 +0000 UTC m=+0.144498343 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:34:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:34:09.641 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:34:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:34:09.641 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:34:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:34:09.642 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:34:10 compute-0 nova_compute[189387]: 2025-11-26 23:34:10.237 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:34:10 compute-0 nova_compute[189387]: 2025-11-26 23:34:10.440 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:34:11 compute-0 podman[247319]: 2025-11-26 23:34:11.825564133 +0000 UTC m=+0.110875554 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:34:15 compute-0 nova_compute[189387]: 2025-11-26 23:34:15.238 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:34:15 compute-0 nova_compute[189387]: 2025-11-26 23:34:15.444 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:34:17 compute-0 podman[247342]: 2025-11-26 23:34:17.786965799 +0000 UTC m=+0.075005535 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Nov 26 23:34:20 compute-0 nova_compute[189387]: 2025-11-26 23:34:20.242 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:34:20 compute-0 nova_compute[189387]: 2025-11-26 23:34:20.447 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:34:25 compute-0 nova_compute[189387]: 2025-11-26 23:34:25.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:34:25 compute-0 nova_compute[189387]: 2025-11-26 23:34:25.124 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 26 23:34:25 compute-0 nova_compute[189387]: 2025-11-26 23:34:25.137 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 26 23:34:25 compute-0 nova_compute[189387]: 2025-11-26 23:34:25.244 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:34:25 compute-0 nova_compute[189387]: 2025-11-26 23:34:25.452 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:34:25 compute-0 podman[247362]: 2025-11-26 23:34:25.871882408 +0000 UTC m=+0.149895637 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 23:34:27 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 26 23:34:28 compute-0 podman[247402]: 2025-11-26 23:34:28.828520081 +0000 UTC m=+0.079162436 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_id=edpm, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64)
Nov 26 23:34:28 compute-0 podman[247389]: 2025-11-26 23:34:28.842738521 +0000 UTC m=+0.116233757 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 23:34:28 compute-0 podman[247388]: 2025-11-26 23:34:28.848481674 +0000 UTC m=+0.114559791 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, release=1214.1726694543, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.openshift.expose-services=, architecture=x86_64, io.buildah.version=1.29.0, release-0.7.12=)
Nov 26 23:34:28 compute-0 podman[247396]: 2025-11-26 23:34:28.849269925 +0000 UTC m=+0.095879342 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 26 23:34:28 compute-0 podman[247390]: 2025-11-26 23:34:28.868381786 +0000 UTC m=+0.118348403 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 26 23:34:29 compute-0 podman[203621]: time="2025-11-26T23:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:34:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:34:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4812 "" "Go-http-client/1.1"
Nov 26 23:34:30 compute-0 nova_compute[189387]: 2025-11-26 23:34:30.247 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:34:30 compute-0 nova_compute[189387]: 2025-11-26 23:34:30.455 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:34:31 compute-0 openstack_network_exporter[205787]: ERROR   23:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:34:31 compute-0 openstack_network_exporter[205787]: ERROR   23:34:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:34:31 compute-0 openstack_network_exporter[205787]: ERROR   23:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:34:31 compute-0 openstack_network_exporter[205787]: ERROR   23:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:34:35 compute-0 nova_compute[189387]: 2025-11-26 23:34:35.138 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:34:35 compute-0 nova_compute[189387]: 2025-11-26 23:34:35.139 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:34:35 compute-0 nova_compute[189387]: 2025-11-26 23:34:35.184 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 23:34:35 compute-0 nova_compute[189387]: 2025-11-26 23:34:35.185 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:34:35 compute-0 nova_compute[189387]: 2025-11-26 23:34:35.217 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:34:35 compute-0 nova_compute[189387]: 2025-11-26 23:34:35.218 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:34:35 compute-0 nova_compute[189387]: 2025-11-26 23:34:35.218 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:34:35 compute-0 nova_compute[189387]: 2025-11-26 23:34:35.219 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:34:35 compute-0 nova_compute[189387]: 2025-11-26 23:34:35.251 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:34:35 compute-0 nova_compute[189387]: 2025-11-26 23:34:35.337 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:34:35 compute-0 nova_compute[189387]: 2025-11-26 23:34:35.442 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:34:35 compute-0 nova_compute[189387]: 2025-11-26 23:34:35.445 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:34:35 compute-0 nova_compute[189387]: 2025-11-26 23:34:35.474 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:34:35 compute-0 nova_compute[189387]: 2025-11-26 23:34:35.546 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:34:35 compute-0 nova_compute[189387]: 2025-11-26 23:34:35.548 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:34:35 compute-0 nova_compute[189387]: 2025-11-26 23:34:35.631 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:34:35 compute-0 nova_compute[189387]: 2025-11-26 23:34:35.633 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:34:35 compute-0 nova_compute[189387]: 2025-11-26 23:34:35.744 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:34:35 compute-0 nova_compute[189387]: 2025-11-26 23:34:35.754 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:34:35 compute-0 nova_compute[189387]: 2025-11-26 23:34:35.828 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:34:35 compute-0 nova_compute[189387]: 2025-11-26 23:34:35.829 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:34:35 compute-0 nova_compute[189387]: 2025-11-26 23:34:35.920 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:34:35 compute-0 nova_compute[189387]: 2025-11-26 23:34:35.922 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:34:36 compute-0 nova_compute[189387]: 2025-11-26 23:34:36.023 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:34:36 compute-0 nova_compute[189387]: 2025-11-26 23:34:36.027 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:34:36 compute-0 nova_compute[189387]: 2025-11-26 23:34:36.092 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:34:36 compute-0 nova_compute[189387]: 2025-11-26 23:34:36.560 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:34:36 compute-0 nova_compute[189387]: 2025-11-26 23:34:36.562 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4847MB free_disk=72.33487701416016GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 23:34:36 compute-0 nova_compute[189387]: 2025-11-26 23:34:36.562 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:34:36 compute-0 nova_compute[189387]: 2025-11-26 23:34:36.562 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:34:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:36.845 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 23:34:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:36.846 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 23:34:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:36.846 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:34:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:36.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:34:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:36.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:34:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:36.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:34:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:34:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:34:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:34:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:34:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:34:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:34:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:34:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:34:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:34:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:34:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:34:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:34:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:34:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:34:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:34:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:34:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:34:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:34:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:34:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:34:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:34:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:34:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
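The registration burst above is ceilometer fanning its discovered pollster plugins out onto one shared thread pool, each starting with empty cache, history, and discovery-cache dictionaries. A minimal stdlib-only sketch of that pattern (the Extension class and meter names here are illustrative stand-ins, not ceilometer's actual API):

    from concurrent.futures import ThreadPoolExecutor

    # Sketch of the registration fan-out seen above: each discovered extension
    # (pollster plugin) is recorded together with the shared executor and empty
    # per-cycle state dictionaries. Names are illustrative, not ceilometer's.
    class Extension:
        def __init__(self, name):
            self.name = name

    executor = ThreadPoolExecutor(max_workers=4)
    registrations = []
    for ext in [Extension("disk.ephemeral.size"), Extension("cpu")]:
        registrations.append({
            "pollster": ext,
            "executor": executor,            # one pool shared by all pollsters
            "cache": {},                     # shared polling cache
            "pollster_history": {},          # last samples per pollster
            "discovery_cache": {},           # resources discovered this cycle
        })
        print(f"Registered pollster {ext.name}")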
Nov 26 23:34:37 compute-0 nova_compute[189387]: 2025-11-26 23:34:37.162 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 3214d9e6-3c61-49f0-a353-01201a6aa6db actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:34:37 compute-0 nova_compute[189387]: 2025-11-26 23:34:37.162 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance f0ac9c29-04ba-4737-8af6-8fc91e451e8c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:34:37 compute-0 nova_compute[189387]: 2025-11-26 23:34:37.163 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:34:37 compute-0 nova_compute[189387]: 2025-11-26 23:34:37.163 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
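The final resource view follows directly from the two placement allocations logged just above plus nova's reserved host memory (512 MB, matching the MEMORY_MB 'reserved' value in the inventory later in this cycle). A quick arithmetic check:

    # Reproduce the "Final resource view" numbers from the two allocations above.
    allocations = [
        {"DISK_GB": 2, "MEMORY_MB": 512, "VCPU": 1},  # instance 3214d9e6
        {"DISK_GB": 2, "MEMORY_MB": 512, "VCPU": 1},  # instance f0ac9c29
    ]
    reserved_host_memory_mb = 512  # assumption: matches the inventory's MEMORY_MB reserved
    used_ram = reserved_host_memory_mb + sum(a["MEMORY_MB"] for a in allocations)
    used_disk = sum(a["DISK_GB"] for a in allocations)
    used_vcpus = sum(a["VCPU"] for a in allocations)
    assert (used_ram, used_disk, used_vcpus) == (1536, 4, 2)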
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.167 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f0ac9c29-04ba-4737-8af6-8fc91e451e8c', 'name': 'vn-fhdmirp-gcwraztym6um-bi3jxhg2edck-vnf-4tssxs7u7dl3', 'flavor': {'id': 'abcd883d-a9af-4dee-93ae-b5623bc853b6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '422f324f-e13a-4c74-ba29-023e791ed636'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd2e793599b6418881c391df7f71e0c6', 'user_id': '6ad061874c77438db2e6d8efb2b1400b', 'hostId': '78fe62e880b703c207d346101c9f9f1436f7f233cb48d27a5485236f', 'status': 'active', 'metadata': {'metering.server_group': '6ec897c5-079b-468e-ab49-e7a7350f9bc9'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.172 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '3214d9e6-3c61-49f0-a353-01201a6aa6db', 'name': 'test_0', 'flavor': {'id': 'abcd883d-a9af-4dee-93ae-b5623bc853b6', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '422f324f-e13a-4c74-ba29-023e791ed636'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'dd2e793599b6418881c391df7f71e0c6', 'user_id': '6ad061874c77438db2e6d8efb2b1400b', 'hostId': '78fe62e880b703c207d346101c9f9f1436f7f233cb48d27a5485236f', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.174 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.174 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.175 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.176 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.178 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
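Every meter in this cycle runs through the same sequence: discovery, a coordination check (a no-op here because no hashring is configured for the source), the poll itself, then a heartbeat that a sibling worker process records. A compressed sketch of that control flow, with illustrative function names rather than ceilometer's internals:

    import datetime

    def run_pollster(name, discover, poll, needs_coordination=False):
        # 1. discovery: find the resources this pollster should sample
        resources = discover()
        # 2. coordination: only consulted when the source defines a hashring
        if needs_coordination:
            raise NotImplementedError("hashring partitioning not shown here")
        # 3. poll: turn each discovered resource into a sample
        samples = [poll(r) for r in resources]
        # 4. heartbeat: record that this pollster finished a cycle
        stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
        return samples, (name, stamp)

    samples, heartbeat = run_pollster(
        "disk.ephemeral.size",
        discover=lambda: ["instance-00000001", "instance-00000004"],
        poll=lambda r: {"resource": r, "volume": 1},  # 1 GB ephemeral per m1.small flavor
    )
    print(samples, heartbeat)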
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.179 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.180 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.180 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.181 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T23:34:37.176595) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.181 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.182 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T23:34:37.181658) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.189 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.196 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.198 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
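The network.* volumes are read from libvirt's per-interface counters; virDomainInterfaceStats returns an 8-tuple in a fixed order that maps one-to-one onto the meter names polled in this cycle. A sketch using a stand-in tuple (values chosen to echo the f0ac9c29 samples in this cycle):

    # Order of libvirt's interface stats tuple:
    # (rx_bytes, rx_packets, rx_errs, rx_drop, tx_bytes, tx_packets, tx_errs, tx_drop)
    stats = (1696, 17, 0, 0, 2468, 24, 0, 0)  # stand-in values, not a live libvirt call
    meters = {
        "network.incoming.bytes": stats[0],
        "network.incoming.packets": stats[1],
        "network.incoming.packets.error": stats[2],
        "network.incoming.packets.drop": stats[3],
        "network.outgoing.bytes": stats[4],
        "network.outgoing.packets": stats[5],
        "network.outgoing.packets.error": stats[6],
        "network.outgoing.packets.drop": stats[7],
    }
    for name, volume in meters.items():
        print(f"{name} volume: {volume}")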
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.198 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.198 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.198 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.198 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.199 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.199 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T23:34:37.199179) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.200 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.200 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.201 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.201 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.201 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.202 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.202 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.202 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T23:34:37.201913) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.203 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.203 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.204 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.204 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.204 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.204 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.205 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.206 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T23:34:37.205263) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:34:37 compute-0 nova_compute[189387]: 2025-11-26 23:34:37.231 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing inventories for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.242 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/cpu volume: 40140000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.266 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/cpu volume: 46640000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.267 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
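The cpu volumes are cumulative guest CPU time in nanoseconds, so the two samples above amount to roughly 40 s and 47 s of CPU time; utilization comes from dividing the delta between successive cycles by the elapsed wall-clock time and vCPU count. A worked check:

    NS_PER_S = 1_000_000_000
    cpu_ns = {"f0ac9c29": 40_140_000_000, "3214d9e6": 46_640_000_000}
    for inst, ns in cpu_ns.items():
        print(f"{inst}: {ns / NS_PER_S:.2f} s of CPU time")
    # Utilization over an interval (sketch):
    # percent = delta_cpu_ns / (elapsed_s * NS_PER_S * vcpus) * 100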
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.267 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.267 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.267 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.267 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.267 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.267 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.268 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.268 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T23:34:37.267765) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.268 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.268 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.268 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.268 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.268 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.269 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.269 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T23:34:37.269055) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.307 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.307 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.307 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 nova_compute[189387]: 2025-11-26 23:34:37.307 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Updating ProviderTree inventory for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 26 23:34:37 compute-0 nova_compute[189387]: 2025-11-26 23:34:37.308 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Updating inventory in ProviderTree for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 26 23:34:37 compute-0 nova_compute[189387]: 2025-11-26 23:34:37.343 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing aggregate associations for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
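Placement treats each resource class's usable capacity as (total - reserved) * allocation_ratio, so the inventory reported above gives this host 32 schedulable VCPUs, 7168 MB of RAM, and 70.2 GB of disk:

    # Effective capacity per resource class from the inventory logged above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: capacity {capacity}")
    # VCPU: 32.0, MEMORY_MB: 7168.0, DISK_GB: 70.2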
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.346 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.347 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.347 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.348 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.348 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.349 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.349 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.349 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.349 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.349 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.outgoing.bytes volume: 2468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.350 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.351 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.351 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.351 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.352 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T23:34:37.349602) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.352 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.352 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.353 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T23:34:37.352705) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.353 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.353 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.354 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.354 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.354 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.354 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.355 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.355 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.355 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T23:34:37.355232) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.355 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.356 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.357 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
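memory.usage is reported in MB, so both m1.small guests above are sitting just under 10% of their 512 MB allocation:

    flavor_ram_mb = 512  # m1.small, from the discovery output earlier in this cycle
    usage_mb = {"f0ac9c29": 48.953125, "3214d9e6": 48.76171875}
    for inst, mb in usage_mb.items():
        print(f"{inst}: {mb:.1f} MB ({mb / flavor_ram_mb:.1%} of flavor RAM)")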
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.357 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.357 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.358 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.358 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.358 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.359 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.358 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T23:34:37.358477) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.359 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.bytes volume: 2346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.360 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.360 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.360 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
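The skip above is the per-cycle discovery cache at work: each discovery method runs at most once per cycle, and a pollster whose discovery yields no resources is not executed. A tiny sketch of that memoization (names illustrative, not ceilometer's internals):

    discovery_cache = {}

    def discover(method, run):
        # Each discovery method runs at most once per polling cycle; later
        # pollsters sharing the same method reuse the cached result.
        if method not in discovery_cache:
            discovery_cache[method] = run()
        return discovery_cache[method]

    resources = discover("local_instances", lambda: [])
    if not resources:
        print("Skip pollster network.outgoing.bytes.rate, no new resources found this cycle")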
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.361 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.361 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.361 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.361 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.361 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.362 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.362 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.363 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T23:34:37.361744) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.363 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.364 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.364 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.364 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.364 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.365 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T23:34:37.364539) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:34:37 compute-0 nova_compute[189387]: 2025-11-26 23:34:37.373 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing trait associations for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78, traits: COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE41,HW_CPU_X86_AMD_SVM,HW_CPU_X86_MMX,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,HW_CPU_X86_SSE,HW_CPU_X86_AESNI,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI2,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 26 23:34:37 compute-0 nova_compute[189387]: 2025-11-26 23:34:37.474 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.481 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.482 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.482 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 nova_compute[189387]: 2025-11-26 23:34:37.494 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 23:34:37 compute-0 nova_compute[189387]: 2025-11-26 23:34:37.496 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:34:37 compute-0 nova_compute[189387]: 2025-11-26 23:34:37.497 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:34:37 compute-0 nova_compute[189387]: 2025-11-26 23:34:37.499 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
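The lockutils line above reports how long the resource tracker held the "compute_resources" semaphore for this update pass (0.935 s). A stdlib sketch of the same acquire/measure/release bookkeeping that oslo.concurrency's synchronized decorator performs (the helper below is illustrative, not oslo's implementation):

    import threading
    import time

    _locks = {"compute_resources": threading.Lock()}

    def synchronized(name, fn):
        # Acquire the named lock, run fn, and report how long the lock was held.
        lock = _locks[name]
        with lock:
            start = time.monotonic()
            try:
                return fn()
            finally:
                held = time.monotonic() - start
                print(f'Lock "{name}" "released" :: held {held:.3f}s')

    synchronized("compute_resources", lambda: time.sleep(0.1))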
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.596 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.596 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.597 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.598 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.598 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.598 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.599 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.599 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.599 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.599 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.600 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T23:34:37.599391) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.600 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.601 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.601 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.601 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.601 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.601 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.602 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.602 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.602 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.603 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.603 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.604 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T23:34:37.602062) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.604 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.604 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.604 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.604 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.605 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.latency volume: 1305394210 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.605 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.latency volume: 123508779 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.606 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.latency volume: 100732301 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.606 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.latency volume: 766490036 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.607 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.latency volume: 135917507 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.607 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.latency volume: 99383059 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.608 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.609 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.609 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.609 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.609 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T23:34:37.604916) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.609 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.610 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.610 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.610 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.611 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T23:34:37.610164) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.611 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.611 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.612 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.612 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.613 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.613 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.613 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.614 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.614 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.614 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.614 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.615 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.615 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.616 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.616 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.617 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.618 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.618 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.618 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.619 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.619 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.619 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.620 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.620 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T23:34:37.614458) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.620 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T23:34:37.619675) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.620 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.621 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.621 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.621 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.622 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.622 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.622 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.622 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.623 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.623 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.623 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.623 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.623 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T23:34:37.623292) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.623 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.624 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.624 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.624 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.625 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.625 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.625 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.625 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.625 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.625 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.626 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.626 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.626 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.627 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.627 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.627 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.627 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T23:34:37.626125) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.627 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.627 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.628 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.latency volume: 2831606495 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.628 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T23:34:37.627840) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.628 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.latency volume: 12954358 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.628 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.629 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.latency volume: 2067067389 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.629 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.latency volume: 14796330 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.629 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.630 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.630 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.630 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.630 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.630 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.630 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.631 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.631 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T23:34:37.630686) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.631 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.631 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.631 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.631 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.632 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.632 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.632 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.632 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.632 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.633 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T23:34:37.632301) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.633 14 DEBUG ceilometer.compute.pollsters [-] f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.633 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.requests volume: 231 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.633 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.634 14 DEBUG ceilometer.compute.pollsters [-] 3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.634 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.634 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.634 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.635 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.636 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.637 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.638 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.639 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:34:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:34:37.639 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:34:37 compute-0 podman[247505]: 2025-11-26 23:34:37.848699428 +0000 UTC m=+0.121046785 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 26 23:34:38 compute-0 nova_compute[189387]: 2025-11-26 23:34:38.452 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:34:38 compute-0 nova_compute[189387]: 2025-11-26 23:34:38.453 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:34:38 compute-0 nova_compute[189387]: 2025-11-26 23:34:38.453 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:34:38 compute-0 nova_compute[189387]: 2025-11-26 23:34:38.453 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:34:38 compute-0 nova_compute[189387]: 2025-11-26 23:34:38.453 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 23:34:40 compute-0 nova_compute[189387]: 2025-11-26 23:34:40.255 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:34:40 compute-0 nova_compute[189387]: 2025-11-26 23:34:40.477 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:34:42 compute-0 nova_compute[189387]: 2025-11-26 23:34:42.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:34:42 compute-0 podman[247525]: 2025-11-26 23:34:42.814498694 +0000 UTC m=+0.096943411 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:34:43 compute-0 nova_compute[189387]: 2025-11-26 23:34:43.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:34:44 compute-0 nova_compute[189387]: 2025-11-26 23:34:44.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:34:45 compute-0 nova_compute[189387]: 2025-11-26 23:34:45.258 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:34:45 compute-0 nova_compute[189387]: 2025-11-26 23:34:45.481 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:34:48 compute-0 podman[247548]: 2025-11-26 23:34:48.81247462 +0000 UTC m=+0.097351003 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 26 23:34:50 compute-0 nova_compute[189387]: 2025-11-26 23:34:50.262 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:34:50 compute-0 nova_compute[189387]: 2025-11-26 23:34:50.484 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:34:51 compute-0 nova_compute[189387]: 2025-11-26 23:34:51.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:34:51 compute-0 nova_compute[189387]: 2025-11-26 23:34:51.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 26 23:34:51 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Nov 26 23:34:51 compute-0 systemd[1]: session-31.scope: Consumed 4.565s CPU time.
Nov 26 23:34:51 compute-0 systemd-logind[819]: Session 31 logged out. Waiting for processes to exit.
Nov 26 23:34:51 compute-0 systemd-logind[819]: Removed session 31.
Nov 26 23:34:55 compute-0 nova_compute[189387]: 2025-11-26 23:34:55.265 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:34:55 compute-0 nova_compute[189387]: 2025-11-26 23:34:55.488 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:34:56 compute-0 podman[247566]: 2025-11-26 23:34:56.879153285 +0000 UTC m=+0.159203694 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Nov 26 23:34:59 compute-0 podman[203621]: time="2025-11-26T23:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:34:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:34:59 compute-0 podman[247591]: 2025-11-26 23:34:59.830806186 +0000 UTC m=+0.125988712 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, managed_by=edpm_ansible, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, config_id=edpm, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, name=ubi9, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 26 23:34:59 compute-0 podman[247597]: 2025-11-26 23:34:59.843880588 +0000 UTC m=+0.118410299 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Nov 26 23:34:59 compute-0 podman[247592]: 2025-11-26 23:34:59.849201391 +0000 UTC m=+0.134266195 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 23:34:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4813 "" "Go-http-client/1.1"
Nov 26 23:34:59 compute-0 podman[247607]: 2025-11-26 23:34:59.86070679 +0000 UTC m=+0.125506460 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, container_name=openstack_network_exporter, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 26 23:34:59 compute-0 podman[247599]: 2025-11-26 23:34:59.865817956 +0000 UTC m=+0.119036166 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 26 23:35:00 compute-0 nova_compute[189387]: 2025-11-26 23:35:00.267 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:35:00 compute-0 nova_compute[189387]: 2025-11-26 23:35:00.491 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:35:01 compute-0 openstack_network_exporter[205787]: ERROR   23:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:35:01 compute-0 openstack_network_exporter[205787]: ERROR   23:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:35:01 compute-0 openstack_network_exporter[205787]: ERROR   23:35:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:35:01 compute-0 openstack_network_exporter[205787]: ERROR   23:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:35:01 compute-0 openstack_network_exporter[205787]: ERROR   23:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
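The appctl.go errors above are expected on a compute node: ovn-northd runs only on the control plane, and the dpif-netdev/* commands target the userspace (DPDK) datapath, which this host does not use. OVS and OVN daemons expose their unixctl interface through control sockets named <daemon>.<pid>.ctl in their rundir, which is how the exporter derives a daemon's PID; a hedged sketch of that discovery step, with the rundir paths assumed rather than taken from the exporter source:

    import glob
    import os
    import re

    # Assumed stock rundirs; the exporter's actual search paths may differ
    # (on this host the OVN rundir is bind-mounted from /var/lib/openvswitch/ovn).
    RUNDIRS = {"ovs": "/var/run/openvswitch", "ovn": "/var/run/ovn"}

    def find_ctl(daemon: str, rundir: str):
        """Return (pid, socket_path) for a daemon's unixctl control socket.

        OVS/OVN daemons create sockets named <daemon>.<pid>.ctl, so the PID
        can be recovered from the filename itself.
        """
        for path in glob.glob(os.path.join(rundir, f"{daemon}.*.ctl")):
            m = re.search(rf"{re.escape(daemon)}\.(\d+)\.ctl$", path)
            if m:
                return int(m.group(1)), path
        return None  # mirrors the "no control socket files found" error above

    print(find_ctl("ovs-vswitchd", RUNDIRS["ovs"]))
    print(find_ctl("ovn-northd", RUNDIRS["ovn"]))  # None on a compute node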
Nov 26 23:35:05 compute-0 nova_compute[189387]: 2025-11-26 23:35:05.271 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:35:05 compute-0 nova_compute[189387]: 2025-11-26 23:35:05.494 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:35:08 compute-0 podman[247685]: 2025-11-26 23:35:08.867779296 +0000 UTC m=+0.151202439 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 26 23:35:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:35:09.642 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:35:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:35:09.643 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:35:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:35:09.643 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
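The three-line acquire/acquired/released sequence above is the standard oslo.concurrency named-lock pattern: neutron wraps _check_child_processes in a lock so only one monitor pass runs at a time, and the waited/held durations are emitted by the lockutils wrapper itself. A minimal sketch of the same pattern, using the lock name from the log with an illustrative stand-in function:

    from oslo_concurrency import lockutils

    # Same named-lock pattern the ProcessMonitor uses; "worker" is a
    # stand-in for _check_child_processes.
    @lockutils.synchronized('_check_child_processes')
    def worker():
        # Runs with the lock held; lockutils logs DEBUG lines like the
        # Acquiring/acquired/released trio above around this call.
        pass

    worker()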
Nov 26 23:35:10 compute-0 nova_compute[189387]: 2025-11-26 23:35:10.275 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:35:10 compute-0 nova_compute[189387]: 2025-11-26 23:35:10.497 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:35:13 compute-0 podman[247705]: 2025-11-26 23:35:13.81331937 +0000 UTC m=+0.092064752 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 23:35:15 compute-0 nova_compute[189387]: 2025-11-26 23:35:15.277 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:35:15 compute-0 nova_compute[189387]: 2025-11-26 23:35:15.501 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:35:19 compute-0 podman[247729]: 2025-11-26 23:35:19.854529252 +0000 UTC m=+0.137204063 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 26 23:35:20 compute-0 nova_compute[189387]: 2025-11-26 23:35:20.280 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:35:20 compute-0 nova_compute[189387]: 2025-11-26 23:35:20.503 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:35:25 compute-0 nova_compute[189387]: 2025-11-26 23:35:25.283 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:35:25 compute-0 nova_compute[189387]: 2025-11-26 23:35:25.508 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:35:27 compute-0 podman[247749]: 2025-11-26 23:35:27.899342754 +0000 UTC m=+0.182379376 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 23:35:29 compute-0 podman[203621]: time="2025-11-26T23:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:35:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:35:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
Nov 26 23:35:30 compute-0 nova_compute[189387]: 2025-11-26 23:35:30.286 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:35:30 compute-0 nova_compute[189387]: 2025-11-26 23:35:30.511 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:35:30 compute-0 podman[247774]: 2025-11-26 23:35:30.831343538 +0000 UTC m=+0.101446793 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 23:35:30 compute-0 podman[247783]: 2025-11-26 23:35:30.837225767 +0000 UTC m=+0.095621268 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.expose-services=, version=9.6, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vcs-type=git, io.buildah.version=1.33.7, distribution-scope=public, config_id=edpm, vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, maintainer=Red Hat, Inc.)
Nov 26 23:35:30 compute-0 podman[247775]: 2025-11-26 23:35:30.838410208 +0000 UTC m=+0.114398761 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 23:35:30 compute-0 podman[247776]: 2025-11-26 23:35:30.843911586 +0000 UTC m=+0.113123517 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 26 23:35:30 compute-0 podman[247773]: 2025-11-26 23:35:30.852194678 +0000 UTC m=+0.132569838 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.buildah.version=1.29.0, name=ubi9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, com.redhat.component=ubi9-container, container_name=kepler, io.openshift.tags=base rhel9, config_id=edpm, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64)
Nov 26 23:35:31 compute-0 openstack_network_exporter[205787]: ERROR   23:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:35:31 compute-0 openstack_network_exporter[205787]: ERROR   23:35:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:35:31 compute-0 openstack_network_exporter[205787]: ERROR   23:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:35:31 compute-0 openstack_network_exporter[205787]: ERROR   23:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:35:31 compute-0 openstack_network_exporter[205787]: ERROR   23:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:35:35 compute-0 nova_compute[189387]: 2025-11-26 23:35:35.148 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:35:35 compute-0 nova_compute[189387]: 2025-11-26 23:35:35.172 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:35:35 compute-0 nova_compute[189387]: 2025-11-26 23:35:35.172 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:35:35 compute-0 nova_compute[189387]: 2025-11-26 23:35:35.173 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:35:35 compute-0 nova_compute[189387]: 2025-11-26 23:35:35.173 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:35:35 compute-0 nova_compute[189387]: 2025-11-26 23:35:35.257 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:35:35 compute-0 nova_compute[189387]: 2025-11-26 23:35:35.290 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:35:35 compute-0 nova_compute[189387]: 2025-11-26 23:35:35.323 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:35:35 compute-0 nova_compute[189387]: 2025-11-26 23:35:35.324 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:35:35 compute-0 nova_compute[189387]: 2025-11-26 23:35:35.429 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:35:35 compute-0 nova_compute[189387]: 2025-11-26 23:35:35.431 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:35:35 compute-0 nova_compute[189387]: 2025-11-26 23:35:35.513 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:35:35 compute-0 nova_compute[189387]: 2025-11-26 23:35:35.515 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:35:35 compute-0 nova_compute[189387]: 2025-11-26 23:35:35.544 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:35:35 compute-0 nova_compute[189387]: 2025-11-26 23:35:35.613 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk.eph0 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:35:35 compute-0 nova_compute[189387]: 2025-11-26 23:35:35.623 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:35:35 compute-0 nova_compute[189387]: 2025-11-26 23:35:35.703 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:35:35 compute-0 nova_compute[189387]: 2025-11-26 23:35:35.704 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:35:35 compute-0 nova_compute[189387]: 2025-11-26 23:35:35.765 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:35:35 compute-0 nova_compute[189387]: 2025-11-26 23:35:35.767 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:35:35 compute-0 nova_compute[189387]: 2025-11-26 23:35:35.829 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:35:35 compute-0 nova_compute[189387]: 2025-11-26 23:35:35.831 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:35:35 compute-0 nova_compute[189387]: 2025-11-26 23:35:35.930 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
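The resource audit above probes every instance disk with qemu-img info, wrapped in oslo_concurrency.prlimit so a corrupt or malicious image cannot stall the agent: --as=1073741824 caps the address space at 1 GiB and --cpu=30 caps CPU time at 30 seconds. A minimal sketch of issuing the same guarded probe through oslo.concurrency (disk path copied from the log):

    import json

    from oslo_concurrency import processutils

    DISK = "/var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c/disk"

    # Mirrors the logged command: prlimit caps the child's address space
    # (--as) and CPU seconds (--cpu) before exec'ing qemu-img.
    out, _err = processutils.execute(
        'qemu-img', 'info', DISK, '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(
            address_space=1024 ** 3,  # --as=1073741824
            cpu_time=30,              # --cpu=30
        ),
        env_variables={'LC_ALL': 'C', 'LANG': 'C'},
    )
    info = json.loads(out)
    print(info['virtual-size'], info['format'])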
Nov 26 23:35:36 compute-0 nova_compute[189387]: 2025-11-26 23:35:36.690 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:35:36 compute-0 nova_compute[189387]: 2025-11-26 23:35:36.692 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4841MB free_disk=72.33487701416016GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 23:35:36 compute-0 nova_compute[189387]: 2025-11-26 23:35:36.692 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:35:36 compute-0 nova_compute[189387]: 2025-11-26 23:35:36.692 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:35:36 compute-0 nova_compute[189387]: 2025-11-26 23:35:36.840 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 3214d9e6-3c61-49f0-a353-01201a6aa6db actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:35:36 compute-0 nova_compute[189387]: 2025-11-26 23:35:36.841 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance f0ac9c29-04ba-4737-8af6-8fc91e451e8c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:35:36 compute-0 nova_compute[189387]: 2025-11-26 23:35:36.841 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:35:36 compute-0 nova_compute[189387]: 2025-11-26 23:35:36.842 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 23:35:36 compute-0 nova_compute[189387]: 2025-11-26 23:35:36.915 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:35:36 compute-0 nova_compute[189387]: 2025-11-26 23:35:36.934 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
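The inventory payload above fixes what the scheduler may place on this node: for each resource class, placement exposes (total - reserved) * allocation_ratio as capacity. Worked through with the logged values (a small sketch; the formula is placement's standard capacity rule):

    # Placement capacity rule: (total - reserved) * allocation_ratio,
    # with totals and ratios copied from the inventory line above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)
    # Prints VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2: the hypervisor view
    # shows 2 of 8 physical vcpus used (free_vcpus=6), while the scheduler
    # sees 32 allocatable VCPUs thanks to the 4.0 overcommit ratio.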
Nov 26 23:35:36 compute-0 nova_compute[189387]: 2025-11-26 23:35:36.937 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:35:36 compute-0 nova_compute[189387]: 2025-11-26 23:35:36.937 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:35:37 compute-0 nova_compute[189387]: 2025-11-26 23:35:37.915 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:35:37 compute-0 nova_compute[189387]: 2025-11-26 23:35:37.916 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:35:37 compute-0 nova_compute[189387]: 2025-11-26 23:35:37.916 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 23:35:38 compute-0 nova_compute[189387]: 2025-11-26 23:35:38.572 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:35:38 compute-0 nova_compute[189387]: 2025-11-26 23:35:38.573 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:35:38 compute-0 nova_compute[189387]: 2025-11-26 23:35:38.573 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 23:35:38 compute-0 nova_compute[189387]: 2025-11-26 23:35:38.574 189391 DEBUG nova.objects.instance [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3214d9e6-3c61-49f0-a353-01201a6aa6db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 23:35:39 compute-0 podman[247891]: 2025-11-26 23:35:39.828620115 +0000 UTC m=+0.126748482 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 26 23:35:39 compute-0 nova_compute[189387]: 2025-11-26 23:35:39.891 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Updating instance_info_cache with network_info: [{"id": "3109b207-2fdd-46a4-8789-08fff2b3f916", "address": "fa:16:3e:bf:c7:ca", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3109b207-2f", "ovs_interfaceid": "3109b207-2fdd-46a4-8789-08fff2b3f916", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:35:39 compute-0 nova_compute[189387]: 2025-11-26 23:35:39.911 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-3214d9e6-3c61-49f0-a353-01201a6aa6db" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:35:39 compute-0 nova_compute[189387]: 2025-11-26 23:35:39.912 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 26 23:35:39 compute-0 nova_compute[189387]: 2025-11-26 23:35:39.913 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:35:39 compute-0 nova_compute[189387]: 2025-11-26 23:35:39.914 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:35:39 compute-0 nova_compute[189387]: 2025-11-26 23:35:39.915 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:35:39 compute-0 nova_compute[189387]: 2025-11-26 23:35:39.916 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
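
[The run_periodic_tasks records are oslo.service iterating its registered tasks; _reclaim_queued_deletes then short-circuits because reclaim_instance_interval is 0 on this node. A sketch of that shape; the Manager class and the hard-coded interval are illustrative, not nova's code:]

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            reclaim_interval = 0  # stands in for CONF.reclaim_instance_interval
            if reclaim_interval <= 0:
                # Mirrors "CONF.reclaim_instance_interval <= 0, skipping..."
                return
            # ... reclaim soft-deleted instances here ...

    # Driven by calling manager.run_periodic_tasks(context) on a timer.
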
Nov 26 23:35:40 compute-0 nova_compute[189387]: 2025-11-26 23:35:40.293 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:35:40 compute-0 nova_compute[189387]: 2025-11-26 23:35:40.549 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
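
[The recurring "[POLLIN] on fd 26" records are python-ovs's poller logging that the OVSDB connection's socket became readable. A toy reproduction of the same wait, with a pipe standing in for the OVSDB socket:]

    import os
    from ovs import poller

    r, w = os.pipe()
    os.write(w, b"x")            # make the read end readable
    p = poller.Poller()
    p.fd_wait(r, poller.POLLIN)  # the POLLIN event named in the log
    p.block()                    # returns (and logs the wakeup) once r is ready
    print("fd", r, "is readable")
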
Nov 26 23:35:42 compute-0 nova_compute[189387]: 2025-11-26 23:35:42.121 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:35:42 compute-0 nova_compute[189387]: 2025-11-26 23:35:42.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:35:44 compute-0 nova_compute[189387]: 2025-11-26 23:35:44.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:35:44 compute-0 podman[247910]: 2025-11-26 23:35:44.777899012 +0000 UTC m=+0.086256326 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 23:35:45 compute-0 nova_compute[189387]: 2025-11-26 23:35:45.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:35:45 compute-0 nova_compute[189387]: 2025-11-26 23:35:45.293 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:35:45 compute-0 nova_compute[189387]: 2025-11-26 23:35:45.552 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:35:50 compute-0 nova_compute[189387]: 2025-11-26 23:35:50.296 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:35:50 compute-0 nova_compute[189387]: 2025-11-26 23:35:50.555 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:35:50 compute-0 podman[247936]: 2025-11-26 23:35:50.835902816 +0000 UTC m=+0.111617717 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 26 23:35:54 compute-0 nova_compute[189387]: 2025-11-26 23:35:54.119 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.117 189391 DEBUG nova.compute.manager [req-f80c49cb-e686-476e-bec1-9a9275f5e75d req-bec6eb58-d9d5-49eb-81d1-776074e6b760 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Received event network-changed-31b6bc9a-cd65-44ef-96ea-c84d392117c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.119 189391 DEBUG nova.compute.manager [req-f80c49cb-e686-476e-bec1-9a9275f5e75d req-bec6eb58-d9d5-49eb-81d1-776074e6b760 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Refreshing instance network info cache due to event network-changed-31b6bc9a-cd65-44ef-96ea-c84d392117c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.119 189391 DEBUG oslo_concurrency.lockutils [req-f80c49cb-e686-476e-bec1-9a9275f5e75d req-bec6eb58-d9d5-49eb-81d1-776074e6b760 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "refresh_cache-f0ac9c29-04ba-4737-8af6-8fc91e451e8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.120 189391 DEBUG oslo_concurrency.lockutils [req-f80c49cb-e686-476e-bec1-9a9275f5e75d req-bec6eb58-d9d5-49eb-81d1-776074e6b760 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquired lock "refresh_cache-f0ac9c29-04ba-4737-8af6-8fc91e451e8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.121 189391 DEBUG nova.network.neutron [req-f80c49cb-e686-476e-bec1-9a9275f5e75d req-bec6eb58-d9d5-49eb-81d1-776074e6b760 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Refreshing network info cache for port 31b6bc9a-cd65-44ef-96ea-c84d392117c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 23:35:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:35:55.272 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:74:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:17:d1:48:8c:c3'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.274 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:35:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:35:55.274 106595 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 26 23:35:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:35:55.275 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbd59242-3683-4df7-8a2a-12b2eb702783, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
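
[The DbSetCommand above is the metadata agent acknowledging nb_cfg=9 by writing neutron:ovn-metadata-sb-cfg into its Chassis_Private row. Issued through ovsdbapp it looks roughly like this; api stands for an already-connected Idl backend, and building that connection is omitted:]

    def ack_sb_cfg(api, chassis_private_uuid, nb_cfg):
        # Produces the same transaction the agent logs:
        # DbSetCommand(table=Chassis_Private, col_values=(('external_ids', ...),))
        api.db_set(
            "Chassis_Private",
            chassis_private_uuid,
            ("external_ids", {"neutron:ovn-metadata-sb-cfg": str(nb_cfg)}),
        ).execute(check_error=True)
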
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.298 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.383 189391 DEBUG oslo_concurrency.lockutils [None req-d9fa3404-976c-4352-9ad0-c0bb0eb2696b 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "f0ac9c29-04ba-4737-8af6-8fc91e451e8c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.384 189391 DEBUG oslo_concurrency.lockutils [None req-d9fa3404-976c-4352-9ad0-c0bb0eb2696b 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "f0ac9c29-04ba-4737-8af6-8fc91e451e8c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.384 189391 DEBUG oslo_concurrency.lockutils [None req-d9fa3404-976c-4352-9ad0-c0bb0eb2696b 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "f0ac9c29-04ba-4737-8af6-8fc91e451e8c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.385 189391 DEBUG oslo_concurrency.lockutils [None req-d9fa3404-976c-4352-9ad0-c0bb0eb2696b 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "f0ac9c29-04ba-4737-8af6-8fc91e451e8c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.386 189391 DEBUG oslo_concurrency.lockutils [None req-d9fa3404-976c-4352-9ad0-c0bb0eb2696b 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "f0ac9c29-04ba-4737-8af6-8fc91e451e8c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
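
[The terminate path first serializes on the instance UUID, then briefly on a "<uuid>-events" lock to clear pending events; the waited/held timings come from lockutils' instrumentation of the decorated inner function. The pattern, sketched:]

    from oslo_concurrency import lockutils

    def terminate_instance(instance_uuid):
        # nova defines an inner function so the lock name can include the
        # UUID; lockutils logs the acquired/waited and released/held pairs.
        @lockutils.synchronized(instance_uuid)
        def do_terminate_instance():
            print(f"terminating {instance_uuid}")

        do_terminate_instance()

    terminate_instance("f0ac9c29-04ba-4737-8af6-8fc91e451e8c")
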
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.388 189391 INFO nova.compute.manager [None req-d9fa3404-976c-4352-9ad0-c0bb0eb2696b 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Terminating instance#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.390 189391 DEBUG nova.compute.manager [None req-d9fa3404-976c-4352-9ad0-c0bb0eb2696b 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 26 23:35:55 compute-0 kernel: tap31b6bc9a-cd (unregistering): left promiscuous mode
Nov 26 23:35:55 compute-0 NetworkManager[56227]: <info>  [1764200155.4511] device (tap31b6bc9a-cd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 23:35:55 compute-0 ovn_controller[97697]: 2025-11-26T23:35:55Z|00058|binding|INFO|Releasing lport 31b6bc9a-cd65-44ef-96ea-c84d392117c8 from this chassis (sb_readonly=0)
Nov 26 23:35:55 compute-0 ovn_controller[97697]: 2025-11-26T23:35:55Z|00059|binding|INFO|Setting lport 31b6bc9a-cd65-44ef-96ea-c84d392117c8 down in Southbound
Nov 26 23:35:55 compute-0 ovn_controller[97697]: 2025-11-26T23:35:55Z|00060|binding|INFO|Removing iface tap31b6bc9a-cd ovn-installed in OVS
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.470 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:35:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:35:55.480 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:3f:da 192.168.0.69'], port_security=['fa:16:3e:22:3f:da 192.168.0.69'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-nvijrfhdmirp-gcwraztym6um-bi3jxhg2edck-port-6sibpc4dfvzn', 'neutron:cidrs': '192.168.0.69/24', 'neutron:device_id': 'f0ac9c29-04ba-4737-8af6-8fc91e451e8c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-nvijrfhdmirp-gcwraztym6um-bi3jxhg2edck-port-6sibpc4dfvzn', 'neutron:project_id': 'dd2e793599b6418881c391df7f71e0c6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f63b4453-d311-40b9-8478-8f99967e0625', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef9a1501-6a1b-48e2-a80c-71a5e303b45d, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=31b6bc9a-cd65-44ef-96ea-c84d392117c8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:35:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:35:55.482 106595 INFO neutron.agent.ovn.metadata.agent [-] Port 31b6bc9a-cd65-44ef-96ea-c84d392117c8 in datapath 16c31f2c-5dd2-49b9-b313-1ecd3b059554 unbound from our chassis#033[00m
Nov 26 23:35:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:35:55.483 106595 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 16c31f2c-5dd2-49b9-b313-1ecd3b059554#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.485 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:35:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:35:55.508 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[94a2e992-7455-4d71-b9f7-20cc781d050b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:35:55 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Nov 26 23:35:55 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 2min 4.607s CPU time.
Nov 26 23:35:55 compute-0 systemd-machined[155674]: Machine qemu-4-instance-00000004 terminated.
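
[systemd escapes "-" as \x2d in unit names, so machine-qemu\x2d4\x2dinstance\x2d00000004.scope is the scope for machine qemu-4-instance-00000004 above. A small decoder for the escapes seen here, not the full systemd-escape grammar:]

    import re

    def unescape_unit(name: str) -> str:
        # Reverse systemd's \xNN byte escaping.
        return re.sub(r"\\x([0-9a-fA-F]{2})",
                      lambda m: chr(int(m.group(1), 16)), name)

    print(unescape_unit(r"machine-qemu\x2d4\x2dinstance\x2d00000004.scope"))
    # machine-qemu-4-instance-00000004.scope
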
Nov 26 23:35:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:35:55.553 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[afd1e725-1d29-46a3-9b00-45cfea61879d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.557 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:35:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:35:55.559 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[6dff6343-d2f9-4cac-bd4a-c5b89d27a1e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:35:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:35:55.610 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[fb2e1799-01e2-4329-8f14-ff308ad4c2ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.631 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.639 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:35:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:35:55.645 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[45d3a382-dd60-4122-b54e-2542ae277b0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap16c31f2c-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f4:bc:ed'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 15, 'rx_bytes': 532, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 15, 'rx_bytes': 532, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383451, 'reachable_time': 19076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247970, 'error': None, 'target': 'ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:35:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:35:55.667 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[12d53725-7a7b-4ff4-9a38-16523efb9c0e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap16c31f2c-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383460, 'tstamp': 383460}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247977, 'error': None, 'target': 'ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tap16c31f2c-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 383463, 'tstamp': 383463}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247977, 'error': None, 'target': 'ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
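
[The two privsep replies above are netlink dumps (RTM_NEWLINK, then RTM_NEWADDR) taken inside the ovnmeta-<network-id> namespace: the metadata tap still carries 169.254.169.254/32 and 192.168.0.2/24. A hedged pyroute2 sketch of the same query; the namespace name is taken from the log, and this needs the same privileges the privsep daemon has:]

    from pyroute2 import NetNS

    ns_name = "ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554"
    with NetNS(ns_name) as ns:
        for addr in ns.get_addr():
            attrs = dict(addr["attrs"])
            # Prints e.g. "tap16c31f2c-51 169.254.169.254"
            print(attrs.get("IFA_LABEL"), attrs.get("IFA_ADDRESS"))
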
Nov 26 23:35:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:35:55.670 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16c31f2c-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.672 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.679 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:35:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:35:55.680 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap16c31f2c-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:35:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:35:55.681 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:35:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:35:55.681 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap16c31f2c-50, col_values=(('external_ids', {'iface-id': 'fcca7a28-5262-4637-8ef9-d543dee768b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:35:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:35:55.682 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.729 189391 INFO nova.virt.libvirt.driver [-] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Instance destroyed successfully.#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.730 189391 DEBUG nova.objects.instance [None req-d9fa3404-976c-4352-9ad0-c0bb0eb2696b 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lazy-loading 'resources' on Instance uuid f0ac9c29-04ba-4737-8af6-8fc91e451e8c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.753 189391 DEBUG nova.virt.libvirt.vif [None req-d9fa3404-976c-4352-9ad0-c0bb0eb2696b 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T23:25:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-fhdmirp-gcwraztym6um-bi3jxhg2edck-vnf-4tssxs7u7dl3',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-fhdmirp-gcwraztym6um-bi3jxhg2edck-vnf-4tssxs7u7dl3',id=4,image_ref='422f324f-e13a-4c74-ba29-023e791ed636',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-26T23:25:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='6ec897c5-079b-468e-ab49-e7a7350f9bc9'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dd2e793599b6418881c391df7f71e0c6',ramdisk_id='',reservation_id='r-55cchsee',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='422f324f-e13a-4c74-ba29-023e791ed636',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T23:25:53Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03MjIxOTE5MDExMzk1NTkzNDg4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTcyMjE5MTkwMTEzOTU1OTM0ODg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzIyMTkxOTAxMTM5NTU5MzQ4OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTcyMjE5MTkwMTEzOTU1OTM0ODg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03MjIxOTE5MDExMzk1NTkzNDg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03MjIxOTE5MDExMzk1NTkzNDg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Nov 26 23:35:55 compute-0 nova_compute[189387]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzIyMTkxOTAxMTM5NTU5MzQ4OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTcyMjE5MTkwMTEzOTU1OTM0ODg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03MjIxOTE5MDExMzk1NTkzNDg4PT0tLQo=',user_id='6ad061874c77438db2e6d8efb2b1400b',uuid=f0ac9c29-04ba-4737-8af6-8fc91e451e8c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "31b6bc9a-cd65-44ef-96ea-c84d392117c8", "address": "fa:16:3e:22:3f:da", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.69", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31b6bc9a-cd", "ovs_interfaceid": "31b6bc9a-cd65-44ef-96ea-c84d392117c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.753 189391 DEBUG nova.network.os_vif_util [None req-d9fa3404-976c-4352-9ad0-c0bb0eb2696b 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Converting VIF {"id": "31b6bc9a-cd65-44ef-96ea-c84d392117c8", "address": "fa:16:3e:22:3f:da", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.69", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31b6bc9a-cd", "ovs_interfaceid": "31b6bc9a-cd65-44ef-96ea-c84d392117c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.754 189391 DEBUG nova.network.os_vif_util [None req-d9fa3404-976c-4352-9ad0-c0bb0eb2696b 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:22:3f:da,bridge_name='br-int',has_traffic_filtering=True,id=31b6bc9a-cd65-44ef-96ea-c84d392117c8,network=Network(16c31f2c-5dd2-49b9-b313-1ecd3b059554),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31b6bc9a-cd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.755 189391 DEBUG os_vif [None req-d9fa3404-976c-4352-9ad0-c0bb0eb2696b 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:22:3f:da,bridge_name='br-int',has_traffic_filtering=True,id=31b6bc9a-cd65-44ef-96ea-c84d392117c8,network=Network(16c31f2c-5dd2-49b9-b313-1ecd3b059554),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31b6bc9a-cd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.757 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.757 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap31b6bc9a-cd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.759 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:35:55 compute-0 rsyslogd[236865]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.761 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.762 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.766 189391 INFO os_vif [None req-d9fa3404-976c-4352-9ad0-c0bb0eb2696b 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:22:3f:da,bridge_name='br-int',has_traffic_filtering=True,id=31b6bc9a-cd65-44ef-96ea-c84d392117c8,network=Network(16c31f2c-5dd2-49b9-b313-1ecd3b059554),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap31b6bc9a-cd')#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.767 189391 INFO nova.virt.libvirt.driver [None req-d9fa3404-976c-4352-9ad0-c0bb0eb2696b 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Deleting instance files /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c_del#033[00m
Nov 26 23:35:55 compute-0 rsyslogd[236865]: message too long (8192) with configured size 8096, begin of message is: 2025-11-26 23:35:55.753 189391 DEBUG nova.virt.libvirt.vif [None req-d9fa3404-97 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
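
[This warning is why the vif unplug DEBUG record above arrives in two pieces: the message exceeded rsyslog's configured 8096-byte limit, so its tail was re-emitted as the bare-base64 record that follows it. When post-processing this log, continuations can be re-attached by checking for the timestamp prefix; a sketch, where the prefix regex is an assumption about this file's layout:]

    import re

    # Records in this log start with e.g. "Nov 26 23:35:55 compute-0 ".
    PREFIX = re.compile(r"^[A-Z][a-z]{2} [ \d]\d \d\d:\d\d:\d\d \S+ ")

    def join_wrapped(lines):
        out = []
        for line in lines:
            if PREFIX.match(line) or not out:
                out.append(line)
            else:
                out[-1] += line  # no prefix: continuation of the previous record
        return out
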
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.768 189391 INFO nova.virt.libvirt.driver [None req-d9fa3404-976c-4352-9ad0-c0bb0eb2696b 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Deletion of /var/lib/nova/instances/f0ac9c29-04ba-4737-8af6-8fc91e451e8c_del complete#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.826 189391 INFO nova.compute.manager [None req-d9fa3404-976c-4352-9ad0-c0bb0eb2696b 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Took 0.44 seconds to destroy the instance on the hypervisor.#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.827 189391 DEBUG oslo.service.loopingcall [None req-d9fa3404-976c-4352-9ad0-c0bb0eb2696b 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.827 189391 DEBUG nova.compute.manager [-] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.827 189391 DEBUG nova.network.neutron [-] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.836 189391 DEBUG nova.compute.manager [req-ba8607d8-969a-4b96-a576-a86d20425c42 req-ba021841-2e54-43ff-aeef-b320b4b2a929 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Received event network-vif-unplugged-31b6bc9a-cd65-44ef-96ea-c84d392117c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.836 189391 DEBUG oslo_concurrency.lockutils [req-ba8607d8-969a-4b96-a576-a86d20425c42 req-ba021841-2e54-43ff-aeef-b320b4b2a929 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "f0ac9c29-04ba-4737-8af6-8fc91e451e8c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.836 189391 DEBUG oslo_concurrency.lockutils [req-ba8607d8-969a-4b96-a576-a86d20425c42 req-ba021841-2e54-43ff-aeef-b320b4b2a929 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "f0ac9c29-04ba-4737-8af6-8fc91e451e8c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.836 189391 DEBUG oslo_concurrency.lockutils [req-ba8607d8-969a-4b96-a576-a86d20425c42 req-ba021841-2e54-43ff-aeef-b320b4b2a929 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "f0ac9c29-04ba-4737-8af6-8fc91e451e8c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.836 189391 DEBUG nova.compute.manager [req-ba8607d8-969a-4b96-a576-a86d20425c42 req-ba021841-2e54-43ff-aeef-b320b4b2a929 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] No waiting events found dispatching network-vif-unplugged-31b6bc9a-cd65-44ef-96ea-c84d392117c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:35:55 compute-0 nova_compute[189387]: 2025-11-26 23:35:55.837 189391 DEBUG nova.compute.manager [req-ba8607d8-969a-4b96-a576-a86d20425c42 req-ba021841-2e54-43ff-aeef-b320b4b2a929 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Received event network-vif-unplugged-31b6bc9a-cd65-44ef-96ea-c84d392117c8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 26 23:35:56 compute-0 nova_compute[189387]: 2025-11-26 23:35:56.153 189391 DEBUG nova.network.neutron [req-f80c49cb-e686-476e-bec1-9a9275f5e75d req-bec6eb58-d9d5-49eb-81d1-776074e6b760 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Updated VIF entry in instance network info cache for port 31b6bc9a-cd65-44ef-96ea-c84d392117c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 23:35:56 compute-0 nova_compute[189387]: 2025-11-26 23:35:56.154 189391 DEBUG nova.network.neutron [req-f80c49cb-e686-476e-bec1-9a9275f5e75d req-bec6eb58-d9d5-49eb-81d1-776074e6b760 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Updating instance_info_cache with network_info: [{"id": "31b6bc9a-cd65-44ef-96ea-c84d392117c8", "address": "fa:16:3e:22:3f:da", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.69", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap31b6bc9a-cd", "ovs_interfaceid": "31b6bc9a-cd65-44ef-96ea-c84d392117c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:35:56 compute-0 nova_compute[189387]: 2025-11-26 23:35:56.176 189391 DEBUG oslo_concurrency.lockutils [req-f80c49cb-e686-476e-bec1-9a9275f5e75d req-bec6eb58-d9d5-49eb-81d1-776074e6b760 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Releasing lock "refresh_cache-f0ac9c29-04ba-4737-8af6-8fc91e451e8c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:35:57 compute-0 nova_compute[189387]: 2025-11-26 23:35:57.951 189391 DEBUG nova.compute.manager [req-dc8f843d-2187-44cc-8688-04c5c7c645fa req-8c3aa20f-17e9-4a7d-88a3-0181221feaaa f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Received event network-vif-plugged-31b6bc9a-cd65-44ef-96ea-c84d392117c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:35:57 compute-0 nova_compute[189387]: 2025-11-26 23:35:57.951 189391 DEBUG oslo_concurrency.lockutils [req-dc8f843d-2187-44cc-8688-04c5c7c645fa req-8c3aa20f-17e9-4a7d-88a3-0181221feaaa f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "f0ac9c29-04ba-4737-8af6-8fc91e451e8c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:35:57 compute-0 nova_compute[189387]: 2025-11-26 23:35:57.952 189391 DEBUG oslo_concurrency.lockutils [req-dc8f843d-2187-44cc-8688-04c5c7c645fa req-8c3aa20f-17e9-4a7d-88a3-0181221feaaa f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "f0ac9c29-04ba-4737-8af6-8fc91e451e8c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:35:57 compute-0 nova_compute[189387]: 2025-11-26 23:35:57.953 189391 DEBUG oslo_concurrency.lockutils [req-dc8f843d-2187-44cc-8688-04c5c7c645fa req-8c3aa20f-17e9-4a7d-88a3-0181221feaaa f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "f0ac9c29-04ba-4737-8af6-8fc91e451e8c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:35:57 compute-0 nova_compute[189387]: 2025-11-26 23:35:57.953 189391 DEBUG nova.compute.manager [req-dc8f843d-2187-44cc-8688-04c5c7c645fa req-8c3aa20f-17e9-4a7d-88a3-0181221feaaa f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] No waiting events found dispatching network-vif-plugged-31b6bc9a-cd65-44ef-96ea-c84d392117c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:35:57 compute-0 nova_compute[189387]: 2025-11-26 23:35:57.954 189391 WARNING nova.compute.manager [req-dc8f843d-2187-44cc-8688-04c5c7c645fa req-8c3aa20f-17e9-4a7d-88a3-0181221feaaa f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Received unexpected event network-vif-plugged-31b6bc9a-cd65-44ef-96ea-c84d392117c8 for instance with vm_state active and task_state deleting.#033[00m
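The lockutils lines above show the pattern around pop_instance_event: a per-instance "<uuid>-events" lock guards the waiting-event table, and a miss yields the "No waiting events found" message, which here becomes the WARNING because the instance is already deleting. A minimal sketch of that pattern with oslo.concurrency; the in-memory table and function body are illustrative, not nova's implementation.

```python
from oslo_concurrency import lockutils

# Hypothetical per-instance event table keyed like nova's "<uuid>-events"
# locks; empty here, so lookups miss, as in the log.
_events = {"f0ac9c29-04ba-4737-8af6-8fc91e451e8c-events": {}}

def pop_instance_event(instance_uuid, event_name):
    lock_name = f"{instance_uuid}-events"
    # lockutils.lock() is the same primitive the log shows being acquired
    # and released around _pop_event.
    with lockutils.lock(lock_name):
        waiters = _events.get(lock_name, {})
        return waiters.pop(event_name, None)  # None -> "No waiting events found"

print(pop_instance_event(
    "f0ac9c29-04ba-4737-8af6-8fc91e451e8c",
    "network-vif-plugged-31b6bc9a-cd65-44ef-96ea-c84d392117c8"))  # None
```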
Nov 26 23:35:58 compute-0 podman[247990]: 2025-11-26 23:35:58.924317697 +0000 UTC m=+0.204434758 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 23:35:59 compute-0 nova_compute[189387]: 2025-11-26 23:35:59.596 189391 DEBUG nova.network.neutron [-] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:35:59 compute-0 nova_compute[189387]: 2025-11-26 23:35:59.616 189391 INFO nova.compute.manager [-] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Took 3.79 seconds to deallocate network for instance.#033[00m
Nov 26 23:35:59 compute-0 nova_compute[189387]: 2025-11-26 23:35:59.669 189391 DEBUG oslo_concurrency.lockutils [None req-d9fa3404-976c-4352-9ad0-c0bb0eb2696b 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:35:59 compute-0 nova_compute[189387]: 2025-11-26 23:35:59.669 189391 DEBUG oslo_concurrency.lockutils [None req-d9fa3404-976c-4352-9ad0-c0bb0eb2696b 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:35:59 compute-0 podman[203621]: time="2025-11-26T23:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:35:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:35:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4818 "" "Go-http-client/1.1"
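The two GET lines above are ordinary libpod REST calls served over the podman socket. A small sketch of issuing the same containers/json query from Python over a unix socket; the socket path /run/podman/podman.sock is an assumption (it matches the CONTAINER_HOST setting in the podman_exporter config logged at 23:36:15).

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that speaks over a unix socket (the libpod endpoint)."""
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

# Assumed default rootful podman socket path.
conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
containers = json.loads(resp.read())
print(len(containers), "containers")
```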
Nov 26 23:35:59 compute-0 nova_compute[189387]: 2025-11-26 23:35:59.778 189391 DEBUG nova.compute.provider_tree [None req-d9fa3404-976c-4352-9ad0-c0bb0eb2696b 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:35:59 compute-0 nova_compute[189387]: 2025-11-26 23:35:59.800 189391 DEBUG nova.scheduler.client.report [None req-d9fa3404-976c-4352-9ad0-c0bb0eb2696b 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 23:35:59 compute-0 nova_compute[189387]: 2025-11-26 23:35:59.835 189391 DEBUG oslo_concurrency.lockutils [None req-d9fa3404-976c-4352-9ad0-c0bb0eb2696b 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:35:59 compute-0 nova_compute[189387]: 2025-11-26 23:35:59.858 189391 INFO nova.scheduler.client.report [None req-d9fa3404-976c-4352-9ad0-c0bb0eb2696b 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Deleted allocations for instance f0ac9c29-04ba-4737-8af6-8fc91e451e8c#033[00m
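The inventory reported above fixes the provider's effective capacity: placement computes (total - reserved) * allocation_ratio per resource class. A worked check against the logged numbers:

```python
# Inventory exactly as reported for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}

# Placement's capacity formula: (total - reserved) * allocation_ratio.
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g}")  # VCPU: 32, MEMORY_MB: 7168, DISK_GB: 70.2
```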
Nov 26 23:35:59 compute-0 nova_compute[189387]: 2025-11-26 23:35:59.957 189391 DEBUG oslo_concurrency.lockutils [None req-d9fa3404-976c-4352-9ad0-c0bb0eb2696b 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "f0ac9c29-04ba-4737-8af6-8fc91e451e8c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:36:00 compute-0 nova_compute[189387]: 2025-11-26 23:36:00.302 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:36:00 compute-0 nova_compute[189387]: 2025-11-26 23:36:00.760 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:36:01 compute-0 openstack_network_exporter[205787]: ERROR   23:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:36:01 compute-0 openstack_network_exporter[205787]: ERROR   23:36:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:36:01 compute-0 openstack_network_exporter[205787]: ERROR   23:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:36:01 compute-0 openstack_network_exporter[205787]: ERROR   23:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:36:01 compute-0 openstack_network_exporter[205787]: ERROR   23:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
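These exporter errors mean no ovn-northd or ovsdb-server control sockets were found on this host, which is expected on a compute node where ovn-northd does not run. A sketch of the kind of lookup that fails; the rundir patterns below are conventional defaults, not taken from the log.

```python
import glob

# Conventional control-socket locations; the actual rundirs in this
# deployment are an assumption.
patterns = {
    "ovn-northd":   "/var/run/ovn/ovn-northd.*.ctl",
    "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
}

for daemon, pattern in patterns.items():
    matches = glob.glob(pattern)
    if not matches:
        # Mirrors the exporter's "no control socket files found" errors.
        print(f"no control socket files found for {daemon}")
    else:
        print(daemon, "->", matches[0])
```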
Nov 26 23:36:01 compute-0 podman[248016]: 2025-11-26 23:36:01.799879635 +0000 UTC m=+0.081857908 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 26 23:36:01 compute-0 podman[248014]: 2025-11-26 23:36:01.811777085 +0000 UTC m=+0.100596141 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, release=1214.1726694543, version=9.4, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, release-0.7.12=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30)
Nov 26 23:36:01 compute-0 podman[248017]: 2025-11-26 23:36:01.817711234 +0000 UTC m=+0.091230810 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Nov 26 23:36:01 compute-0 podman[248023]: 2025-11-26 23:36:01.827945418 +0000 UTC m=+0.105081431 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public, io.buildah.version=1.33.7, release=1755695350, architecture=x86_64, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., version=9.6)
Nov 26 23:36:01 compute-0 podman[248015]: 2025-11-26 23:36:01.833986471 +0000 UTC m=+0.111656398 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 23:36:05 compute-0 nova_compute[189387]: 2025-11-26 23:36:05.305 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:36:05 compute-0 nova_compute[189387]: 2025-11-26 23:36:05.762 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:36:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:36:09.644 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:36:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:36:09.645 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:36:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:36:09.646 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:36:10 compute-0 nova_compute[189387]: 2025-11-26 23:36:10.306 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:36:10 compute-0 nova_compute[189387]: 2025-11-26 23:36:10.727 189391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764200155.7253313, f0ac9c29-04ba-4737-8af6-8fc91e451e8c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:36:10 compute-0 nova_compute[189387]: 2025-11-26 23:36:10.727 189391 INFO nova.compute.manager [-] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] VM Stopped (Lifecycle Event)#033[00m
Nov 26 23:36:10 compute-0 nova_compute[189387]: 2025-11-26 23:36:10.757 189391 DEBUG nova.compute.manager [None req-c38833ed-00ce-4848-b47d-c855e626e8ae - - - - - -] [instance: f0ac9c29-04ba-4737-8af6-8fc91e451e8c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:36:10 compute-0 nova_compute[189387]: 2025-11-26 23:36:10.765 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:36:10 compute-0 podman[248115]: 2025-11-26 23:36:10.802896545 +0000 UTC m=+0.099550004 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:36:14 compute-0 nova_compute[189387]: 2025-11-26 23:36:14.940 189391 DEBUG oslo_concurrency.lockutils [None req-1cf74bd7-9224-4d8f-9ab1-6072de0796ae 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "3214d9e6-3c61-49f0-a353-01201a6aa6db" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:36:14 compute-0 nova_compute[189387]: 2025-11-26 23:36:14.941 189391 DEBUG oslo_concurrency.lockutils [None req-1cf74bd7-9224-4d8f-9ab1-6072de0796ae 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "3214d9e6-3c61-49f0-a353-01201a6aa6db" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:36:14 compute-0 nova_compute[189387]: 2025-11-26 23:36:14.942 189391 DEBUG oslo_concurrency.lockutils [None req-1cf74bd7-9224-4d8f-9ab1-6072de0796ae 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "3214d9e6-3c61-49f0-a353-01201a6aa6db-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:36:14 compute-0 nova_compute[189387]: 2025-11-26 23:36:14.942 189391 DEBUG oslo_concurrency.lockutils [None req-1cf74bd7-9224-4d8f-9ab1-6072de0796ae 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "3214d9e6-3c61-49f0-a353-01201a6aa6db-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:36:14 compute-0 nova_compute[189387]: 2025-11-26 23:36:14.943 189391 DEBUG oslo_concurrency.lockutils [None req-1cf74bd7-9224-4d8f-9ab1-6072de0796ae 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "3214d9e6-3c61-49f0-a353-01201a6aa6db-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:36:14 compute-0 nova_compute[189387]: 2025-11-26 23:36:14.945 189391 INFO nova.compute.manager [None req-1cf74bd7-9224-4d8f-9ab1-6072de0796ae 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Terminating instance#033[00m
Nov 26 23:36:14 compute-0 nova_compute[189387]: 2025-11-26 23:36:14.946 189391 DEBUG nova.compute.manager [None req-1cf74bd7-9224-4d8f-9ab1-6072de0796ae 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
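The hypervisor-side teardown that follows is driven through libvirt. A hedged sketch of the equivalent calls with the libvirt-python bindings: the UUID is from the log, while the connection URI and undefine flags are assumptions; nova's real path lives in nova.virt.libvirt.driver.

```python
import libvirt

# Assumed local system connection; nova manages this via its own driver.
conn = libvirt.open("qemu:///system")
dom = conn.lookupByUUIDString("3214d9e6-3c61-49f0-a353-01201a6aa6db")
if dom.isActive():
    dom.destroy()  # hard power-off; this is what emits the Stopped lifecycle event
# Flag choice is illustrative; q35/UEFI guests may carry NVRAM to clean up.
dom.undefineFlags(libvirt.VIR_DOMAIN_UNDEFINE_NVRAM)
conn.close()
```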
Nov 26 23:36:14 compute-0 kernel: tap3109b207-2f (unregistering): left promiscuous mode
Nov 26 23:36:15 compute-0 NetworkManager[56227]: <info>  [1764200175.0085] device (tap3109b207-2f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.014 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:36:15 compute-0 ovn_controller[97697]: 2025-11-26T23:36:15Z|00061|binding|INFO|Releasing lport 3109b207-2fdd-46a4-8789-08fff2b3f916 from this chassis (sb_readonly=0)
Nov 26 23:36:15 compute-0 ovn_controller[97697]: 2025-11-26T23:36:15Z|00062|binding|INFO|Setting lport 3109b207-2fdd-46a4-8789-08fff2b3f916 down in Southbound
Nov 26 23:36:15 compute-0 ovn_controller[97697]: 2025-11-26T23:36:15Z|00063|binding|INFO|Removing iface tap3109b207-2f ovn-installed in OVS
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.026 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:36:15 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:36:15.035 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:c7:ca 192.168.0.4'], port_security=['fa:16:3e:bf:c7:ca 192.168.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.4/24', 'neutron:device_id': '3214d9e6-3c61-49f0-a353-01201a6aa6db', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dd2e793599b6418881c391df7f71e0c6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f63b4453-d311-40b9-8478-8f99967e0625', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.212'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef9a1501-6a1b-48e2-a80c-71a5e303b45d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=3109b207-2fdd-46a4-8789-08fff2b3f916) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:36:15 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:36:15.037 106595 INFO neutron.agent.ovn.metadata.agent [-] Port 3109b207-2fdd-46a4-8789-08fff2b3f916 in datapath 16c31f2c-5dd2-49b9-b313-1ecd3b059554 unbound from our chassis#033[00m
Nov 26 23:36:15 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:36:15.040 106595 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 16c31f2c-5dd2-49b9-b313-1ecd3b059554, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 26 23:36:15 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:36:15.041 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[cd018790-bcd5-4d7a-8833-6f1f547e95ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:36:15 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:36:15.042 106595 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554 namespace which is not needed anymore#033[00m
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.068 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:36:15 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Nov 26 23:36:15 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 3min 10.723s CPU time.
Nov 26 23:36:15 compute-0 systemd-machined[155674]: Machine qemu-1-instance-00000001 terminated.
Nov 26 23:36:15 compute-0 podman[248139]: 2025-11-26 23:36:15.160690669 +0000 UTC m=+0.099651656 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.183 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.190 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.241 189391 INFO nova.virt.libvirt.driver [-] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Instance destroyed successfully.#033[00m
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.242 189391 DEBUG nova.objects.instance [None req-1cf74bd7-9224-4d8f-9ab1-6072de0796ae 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lazy-loading 'resources' on Instance uuid 3214d9e6-3c61-49f0-a353-01201a6aa6db obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:36:15 compute-0 neutron-haproxy-ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554[239899]: [NOTICE]   (239904) : haproxy version is 2.8.14-c23fe91
Nov 26 23:36:15 compute-0 neutron-haproxy-ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554[239899]: [NOTICE]   (239904) : path to executable is /usr/sbin/haproxy
Nov 26 23:36:15 compute-0 neutron-haproxy-ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554[239899]: [WARNING]  (239904) : Exiting Master process...
Nov 26 23:36:15 compute-0 neutron-haproxy-ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554[239899]: [WARNING]  (239904) : Exiting Master process...
Nov 26 23:36:15 compute-0 neutron-haproxy-ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554[239899]: [ALERT]    (239904) : Current worker (239906) exited with code 143 (Terminated)
Nov 26 23:36:15 compute-0 neutron-haproxy-ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554[239899]: [WARNING]  (239904) : All workers exited. Exiting... (0)
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.271 189391 DEBUG nova.virt.libvirt.vif [None req-1cf74bd7-9224-4d8f-9ab1-6072de0796ae 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T23:18:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='422f324f-e13a-4c74-ba29-023e791ed636',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-26T23:19:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dd2e793599b6418881c391df7f71e0c6',ramdisk_id='',reservation_id='r-1pai8j0u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='422f324f-e13a-4c74-ba29-023e791ed636',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T23:19:09Z,user_data=None,user_id='6ad061874c77438db2e6d8efb2b1400b',uuid=3214d9e6-3c61-49f0-a353-01201a6aa6db,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3109b207-2fdd-46a4-8789-08fff2b3f916", "address": "fa:16:3e:bf:c7:ca", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3109b207-2f", "ovs_interfaceid": "3109b207-2fdd-46a4-8789-08fff2b3f916", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.272 189391 DEBUG nova.network.os_vif_util [None req-1cf74bd7-9224-4d8f-9ab1-6072de0796ae 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Converting VIF {"id": "3109b207-2fdd-46a4-8789-08fff2b3f916", "address": "fa:16:3e:bf:c7:ca", "network": {"id": "16c31f2c-5dd2-49b9-b313-1ecd3b059554", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.212", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dd2e793599b6418881c391df7f71e0c6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3109b207-2f", "ovs_interfaceid": "3109b207-2fdd-46a4-8789-08fff2b3f916", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.272 189391 DEBUG nova.network.os_vif_util [None req-1cf74bd7-9224-4d8f-9ab1-6072de0796ae 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bf:c7:ca,bridge_name='br-int',has_traffic_filtering=True,id=3109b207-2fdd-46a4-8789-08fff2b3f916,network=Network(16c31f2c-5dd2-49b9-b313-1ecd3b059554),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3109b207-2f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.273 189391 DEBUG os_vif [None req-1cf74bd7-9224-4d8f-9ab1-6072de0796ae 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:c7:ca,bridge_name='br-int',has_traffic_filtering=True,id=3109b207-2fdd-46a4-8789-08fff2b3f916,network=Network(16c31f2c-5dd2-49b9-b313-1ecd3b059554),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3109b207-2f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.274 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.274 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3109b207-2f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
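The DelPortCommand above is ovsdbapp's OVS transaction for removing the tap from br-int. A sketch of issuing the same command directly with ovsdbapp; the database socket path is the usual default and is an assumption here.

```python
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

# Assumed default ovsdb-server socket.
OVSDB = "unix:/run/openvswitch/db.sock"
idl = connection.OvsdbIdl.from_server(OVSDB, "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

# Same semantics as the logged command: delete tap3109b207-2f from br-int,
# succeeding silently if the port is already gone (if_exists=True).
api.del_port("tap3109b207-2f", bridge="br-int", if_exists=True).execute(
    check_error=True)
```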
Nov 26 23:36:15 compute-0 systemd[1]: libpod-a7979d3e4aabd151746f0fb9ffc013f9762c18d0c40bdde656a196d564f5a79a.scope: Deactivated successfully.
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.278 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:36:15 compute-0 podman[248185]: 2025-11-26 23:36:15.280845874 +0000 UTC m=+0.069135447 container died a7979d3e4aabd151746f0fb9ffc013f9762c18d0c40bdde656a196d564f5a79a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.281 189391 INFO os_vif [None req-1cf74bd7-9224-4d8f-9ab1-6072de0796ae 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:c7:ca,bridge_name='br-int',has_traffic_filtering=True,id=3109b207-2fdd-46a4-8789-08fff2b3f916,network=Network(16c31f2c-5dd2-49b9-b313-1ecd3b059554),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3109b207-2f')#033[00m
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.281 189391 INFO nova.virt.libvirt.driver [None req-1cf74bd7-9224-4d8f-9ab1-6072de0796ae 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Deleting instance files /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db_del#033[00m
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.282 189391 INFO nova.virt.libvirt.driver [None req-1cf74bd7-9224-4d8f-9ab1-6072de0796ae 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Deletion of /var/lib/nova/instances/3214d9e6-3c61-49f0-a353-01201a6aa6db_del complete#033[00m
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.307 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:36:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a7979d3e4aabd151746f0fb9ffc013f9762c18d0c40bdde656a196d564f5a79a-userdata-shm.mount: Deactivated successfully.
Nov 26 23:36:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-61e1ba994a8bc183260745e88cbd864581c7ee91b172595fef910c4a4f694f61-merged.mount: Deactivated successfully.
Nov 26 23:36:15 compute-0 podman[248185]: 2025-11-26 23:36:15.324383092 +0000 UTC m=+0.112672665 container cleanup a7979d3e4aabd151746f0fb9ffc013f9762c18d0c40bdde656a196d564f5a79a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 23:36:15 compute-0 systemd[1]: libpod-conmon-a7979d3e4aabd151746f0fb9ffc013f9762c18d0c40bdde656a196d564f5a79a.scope: Deactivated successfully.
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.345 189391 INFO nova.compute.manager [None req-1cf74bd7-9224-4d8f-9ab1-6072de0796ae 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Took 0.40 seconds to destroy the instance on the hypervisor.#033[00m
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.345 189391 DEBUG oslo.service.loopingcall [None req-1cf74bd7-9224-4d8f-9ab1-6072de0796ae 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.346 189391 DEBUG nova.compute.manager [-] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.346 189391 DEBUG nova.network.neutron [-] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
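deallocate_for_instance() then removes the instance's Neutron ports server-side. As a rough stand-in for nova's internal neutron client (openstacksdk here is an illustrative substitute, not what nova actually uses):

```python
import openstack

# Assumes OS_* auth variables in the environment; the device_id filter
# is the instance UUID from the log.
conn = openstack.connect(cloud="envvars")
for port in conn.network.ports(device_id="3214d9e6-3c61-49f0-a353-01201a6aa6db"):
    conn.network.delete_port(port, ignore_missing=True)
```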
Nov 26 23:36:15 compute-0 podman[248227]: 2025-11-26 23:36:15.404595535 +0000 UTC m=+0.052570272 container remove a7979d3e4aabd151746f0fb9ffc013f9762c18d0c40bdde656a196d564f5a79a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 23:36:15 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:36:15.413 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[bb03a212-7310-4b5e-bb6b-a0ae81496f03]: (4, ('Wed Nov 26 11:36:15 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554 (a7979d3e4aabd151746f0fb9ffc013f9762c18d0c40bdde656a196d564f5a79a)\na7979d3e4aabd151746f0fb9ffc013f9762c18d0c40bdde656a196d564f5a79a\nWed Nov 26 11:36:15 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554 (a7979d3e4aabd151746f0fb9ffc013f9762c18d0c40bdde656a196d564f5a79a)\na7979d3e4aabd151746f0fb9ffc013f9762c18d0c40bdde656a196d564f5a79a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:36:15 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:36:15.415 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[786cafc2-0e09-4171-b1fe-8eac0ba98732]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:36:15 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:36:15.416 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap16c31f2c-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.418 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:36:15 compute-0 kernel: tap16c31f2c-50: left promiscuous mode
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.421 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:36:15 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:36:15.424 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[f213bc2f-03d7-46d0-8c10-130747c0809f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.445 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:36:15 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:36:15.455 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[c830beaf-323c-4ddc-8ab2-ee8ad147c609]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:36:15 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:36:15.457 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[93710865-765d-4677-a6ef-5daa2bdf4332]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:36:15 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:36:15.473 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[896bc8b5-5480-40af-bf39-48f15012209e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 383439, 'reachable_time': 15646, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248241, 'error': None, 'target': 'ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:36:15 compute-0 systemd[1]: run-netns-ovnmeta\x2d16c31f2c\x2d5dd2\x2d49b9\x2db313\x2d1ecd3b059554.mount: Deactivated successfully.
Nov 26 23:36:15 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:36:15.485 106708 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-16c31f2c-5dd2-49b9-b313-1ecd3b059554 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 26 23:36:15 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:36:15.486 106708 DEBUG oslo.privsep.daemon [-] privsep: reply[ea1361e9-61c1-4ed2-b6d1-011936ed493f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.800 189391 DEBUG nova.compute.manager [req-adc92ad5-ffd3-4253-8118-dc1e7f333c3e req-26b936a7-a5f8-4f94-a5fa-87319e21cf0a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Received event network-vif-unplugged-3109b207-2fdd-46a4-8789-08fff2b3f916 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.800 189391 DEBUG oslo_concurrency.lockutils [req-adc92ad5-ffd3-4253-8118-dc1e7f333c3e req-26b936a7-a5f8-4f94-a5fa-87319e21cf0a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "3214d9e6-3c61-49f0-a353-01201a6aa6db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.800 189391 DEBUG oslo_concurrency.lockutils [req-adc92ad5-ffd3-4253-8118-dc1e7f333c3e req-26b936a7-a5f8-4f94-a5fa-87319e21cf0a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "3214d9e6-3c61-49f0-a353-01201a6aa6db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.800 189391 DEBUG oslo_concurrency.lockutils [req-adc92ad5-ffd3-4253-8118-dc1e7f333c3e req-26b936a7-a5f8-4f94-a5fa-87319e21cf0a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "3214d9e6-3c61-49f0-a353-01201a6aa6db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.801 189391 DEBUG nova.compute.manager [req-adc92ad5-ffd3-4253-8118-dc1e7f333c3e req-26b936a7-a5f8-4f94-a5fa-87319e21cf0a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] No waiting events found dispatching network-vif-unplugged-3109b207-2fdd-46a4-8789-08fff2b3f916 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:36:15 compute-0 nova_compute[189387]: 2025-11-26 23:36:15.801 189391 DEBUG nova.compute.manager [req-adc92ad5-ffd3-4253-8118-dc1e7f333c3e req-26b936a7-a5f8-4f94-a5fa-87319e21cf0a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Received event network-vif-unplugged-3109b207-2fdd-46a4-8789-08fff2b3f916 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 26 23:36:16 compute-0 nova_compute[189387]: 2025-11-26 23:36:16.212 189391 DEBUG nova.network.neutron [-] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:36:16 compute-0 nova_compute[189387]: 2025-11-26 23:36:16.233 189391 INFO nova.compute.manager [-] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Took 0.89 seconds to deallocate network for instance.#033[00m
Nov 26 23:36:16 compute-0 nova_compute[189387]: 2025-11-26 23:36:16.287 189391 DEBUG oslo_concurrency.lockutils [None req-1cf74bd7-9224-4d8f-9ab1-6072de0796ae 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:36:16 compute-0 nova_compute[189387]: 2025-11-26 23:36:16.288 189391 DEBUG oslo_concurrency.lockutils [None req-1cf74bd7-9224-4d8f-9ab1-6072de0796ae 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:36:16 compute-0 nova_compute[189387]: 2025-11-26 23:36:16.303 189391 DEBUG nova.compute.manager [req-e8afa8c9-975e-47f1-8c88-2b67a8159f43 req-ab38e2af-1695-4bb3-aa95-b8ced04a84e9 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Received event network-vif-deleted-3109b207-2fdd-46a4-8789-08fff2b3f916 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:36:16 compute-0 nova_compute[189387]: 2025-11-26 23:36:16.389 189391 DEBUG nova.compute.provider_tree [None req-1cf74bd7-9224-4d8f-9ab1-6072de0796ae 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:36:16 compute-0 nova_compute[189387]: 2025-11-26 23:36:16.403 189391 DEBUG nova.scheduler.client.report [None req-1cf74bd7-9224-4d8f-9ab1-6072de0796ae 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 23:36:16 compute-0 nova_compute[189387]: 2025-11-26 23:36:16.427 189391 DEBUG oslo_concurrency.lockutils [None req-1cf74bd7-9224-4d8f-9ab1-6072de0796ae 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:36:16 compute-0 nova_compute[189387]: 2025-11-26 23:36:16.458 189391 INFO nova.scheduler.client.report [None req-1cf74bd7-9224-4d8f-9ab1-6072de0796ae 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Deleted allocations for instance 3214d9e6-3c61-49f0-a353-01201a6aa6db#033[00m
Nov 26 23:36:16 compute-0 nova_compute[189387]: 2025-11-26 23:36:16.551 189391 DEBUG oslo_concurrency.lockutils [None req-1cf74bd7-9224-4d8f-9ab1-6072de0796ae 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Lock "3214d9e6-3c61-49f0-a353-01201a6aa6db" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
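The inventory dump a few lines up (report.py:940) fixes what the scheduler can place on this node: placement computes usable capacity as (total - reserved) * allocation_ratio per resource class (min_unit/max_unit/step_size aside). Checking the logged numbers:

    # Capacity implied by the inventory logged above:
    # usable = (total - reserved) * allocation_ratio, per resource class.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2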
Nov 26 23:36:17 compute-0 nova_compute[189387]: 2025-11-26 23:36:17.892 189391 DEBUG nova.compute.manager [req-5d107d4f-4fbf-4a50-a3e4-845b809b1aac req-77cb43b5-839c-4df1-a282-ce301dd7e1c6 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Received event network-vif-plugged-3109b207-2fdd-46a4-8789-08fff2b3f916 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:36:17 compute-0 nova_compute[189387]: 2025-11-26 23:36:17.893 189391 DEBUG oslo_concurrency.lockutils [req-5d107d4f-4fbf-4a50-a3e4-845b809b1aac req-77cb43b5-839c-4df1-a282-ce301dd7e1c6 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "3214d9e6-3c61-49f0-a353-01201a6aa6db-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:36:17 compute-0 nova_compute[189387]: 2025-11-26 23:36:17.894 189391 DEBUG oslo_concurrency.lockutils [req-5d107d4f-4fbf-4a50-a3e4-845b809b1aac req-77cb43b5-839c-4df1-a282-ce301dd7e1c6 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "3214d9e6-3c61-49f0-a353-01201a6aa6db-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:36:17 compute-0 nova_compute[189387]: 2025-11-26 23:36:17.894 189391 DEBUG oslo_concurrency.lockutils [req-5d107d4f-4fbf-4a50-a3e4-845b809b1aac req-77cb43b5-839c-4df1-a282-ce301dd7e1c6 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "3214d9e6-3c61-49f0-a353-01201a6aa6db-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:36:17 compute-0 nova_compute[189387]: 2025-11-26 23:36:17.894 189391 DEBUG nova.compute.manager [req-5d107d4f-4fbf-4a50-a3e4-845b809b1aac req-77cb43b5-839c-4df1-a282-ce301dd7e1c6 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] No waiting events found dispatching network-vif-plugged-3109b207-2fdd-46a4-8789-08fff2b3f916 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:36:17 compute-0 nova_compute[189387]: 2025-11-26 23:36:17.895 189391 WARNING nova.compute.manager [req-5d107d4f-4fbf-4a50-a3e4-845b809b1aac req-77cb43b5-839c-4df1-a282-ce301dd7e1c6 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Received unexpected event network-vif-plugged-3109b207-2fdd-46a4-8789-08fff2b3f916 for instance with vm_state deleted and task_state None.#033[00m
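The acquire/release pairs around pop_instance_event above are Nova's per-instance event lock: external events from Neutron are popped from a waiter table under a lock named <uuid>-events, and with the instance already deleted there is no waiter, so the late vif-plugged event is logged as unexpected. A stripped-down sketch of that pattern with oslo.concurrency (illustrative names, not Nova's actual code):

    # Stripped-down sketch of the lock-guarded event pop seen above.
    # Illustrative only; the real logic is nova.compute.manager.InstanceEvents.
    from oslo_concurrency import lockutils

    _waiters = {}  # {instance_uuid: {event_name: waiter}}

    def pop_instance_event(instance_uuid, event_name):
        with lockutils.lock(instance_uuid + "-events"):
            return _waiters.get(instance_uuid, {}).pop(event_name, None)

    event = "network-vif-plugged-3109b207-2fdd-46a4-8789-08fff2b3f916"
    if pop_instance_event("3214d9e6-3c61-49f0-a353-01201a6aa6db", event) is None:
        print("no waiting events found; event is unexpected")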
Nov 26 23:36:20 compute-0 nova_compute[189387]: 2025-11-26 23:36:20.278 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:36:20 compute-0 nova_compute[189387]: 2025-11-26 23:36:20.307 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:36:21 compute-0 podman[248244]: 2025-11-26 23:36:21.844613843 +0000 UTC m=+0.120826894 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 23:36:25 compute-0 nova_compute[189387]: 2025-11-26 23:36:25.282 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:36:25 compute-0 nova_compute[189387]: 2025-11-26 23:36:25.310 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:36:29 compute-0 podman[203621]: time="2025-11-26T23:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:36:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 26 23:36:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4339 "" "Go-http-client/1.1"
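The two GET lines above are the libpod REST API being scraped over podman's unix socket (the '@' client field is the unix-socket peer; Go-http-client is the caller). A minimal way to issue the same containers/json request by hand, assuming the default root socket at /run/podman/podman.sock:

    # Query the libpod REST API over podman's unix socket by hand.
    # Assumes the default root socket path; URL copied from the log line.
    import socket

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/run/podman/podman.sock")
    s.sendall(b"GET /v4.9.3/libpod/containers/json?all=true&external=false "
              b"HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n")
    reply = b""
    while chunk := s.recv(65536):
        reply += chunk
    print(reply.split(b"\r\n\r\n", 1)[0].decode())  # status line + headers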
Nov 26 23:36:29 compute-0 podman[248265]: 2025-11-26 23:36:29.933235457 +0000 UTC m=+0.220054968 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller)
Nov 26 23:36:30 compute-0 nova_compute[189387]: 2025-11-26 23:36:30.240 189391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764200175.238592, 3214d9e6-3c61-49f0-a353-01201a6aa6db => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:36:30 compute-0 nova_compute[189387]: 2025-11-26 23:36:30.241 189391 INFO nova.compute.manager [-] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] VM Stopped (Lifecycle Event)#033[00m
Nov 26 23:36:30 compute-0 nova_compute[189387]: 2025-11-26 23:36:30.271 189391 DEBUG nova.compute.manager [None req-53dfee15-43c2-4532-8378-e7c5b979efeb - - - - - -] [instance: 3214d9e6-3c61-49f0-a353-01201a6aa6db] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:36:30 compute-0 nova_compute[189387]: 2025-11-26 23:36:30.286 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:36:30 compute-0 nova_compute[189387]: 2025-11-26 23:36:30.313 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:36:31 compute-0 openstack_network_exporter[205787]: ERROR   23:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:36:31 compute-0 openstack_network_exporter[205787]: ERROR   23:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:36:31 compute-0 openstack_network_exporter[205787]: ERROR   23:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:36:31 compute-0 openstack_network_exporter[205787]: ERROR   23:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:36:31 compute-0 openstack_network_exporter[205787]: ERROR   23:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
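These exporter errors are likely benign on a compute node: ovn-northd runs on the control plane, so no ovn-northd.*.ctl control socket exists here, and the dpif-netdev calls would only succeed with a userspace (netdev) datapath, which this host does not appear to use. A quick glob for the control sockets the exporter probes (paths are the conventional OVS/OVN run directories, an assumption about this deployment's mounts):

    # Check for the OVS/OVN control sockets the exporter is probing.
    # Paths are the conventional run directories; adjust to the deployment.
    import glob

    for pattern in ("/run/ovn/ovn-northd.*.ctl",
                    "/run/openvswitch/ovsdb-server.*.ctl",
                    "/run/openvswitch/ovs-vswitchd.*.ctl"):
        found = glob.glob(pattern)
        print(pattern, "->", found or "missing")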
Nov 26 23:36:32 compute-0 podman[248292]: 2025-11-26 23:36:32.832798829 +0000 UTC m=+0.107991130 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:36:32 compute-0 podman[248293]: 2025-11-26 23:36:32.848704676 +0000 UTC m=+0.108962106 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true)
Nov 26 23:36:32 compute-0 podman[248301]: 2025-11-26 23:36:32.863560545 +0000 UTC m=+0.114585297 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.openshift.expose-services=)
Nov 26 23:36:32 compute-0 podman[248291]: 2025-11-26 23:36:32.888545164 +0000 UTC m=+0.173342192 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, name=ubi9, version=9.4, container_name=kepler, distribution-scope=public, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible)
Nov 26 23:36:32 compute-0 podman[248297]: 2025-11-26 23:36:32.892587263 +0000 UTC m=+0.133599926 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:36:35 compute-0 nova_compute[189387]: 2025-11-26 23:36:35.290 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:36:35 compute-0 nova_compute[189387]: 2025-11-26 23:36:35.316 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.846 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.847 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
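The two manager messages above mean the [pollsters] source defines more pollsters than the single worker thread configured for it, so tasks queue on the executor and the cycle simply takes longer; the registrations that follow hand each pollster to that ThreadPoolExecutor with shared cache, history, and discovery-cache dicts. A minimal sketch of the queueing behaviour (illustrative, not ceilometer's code):

    # More tasks than worker threads just means the executor queues them;
    # the polling cycle finishes later rather than failing.
    from concurrent.futures import ThreadPoolExecutor

    pollsters = ["pollster-%d" % i for i in range(12)]  # more tasks than threads

    def run(name):
        # A real pollster would run discovery and emit samples here.
        return "%s: no resources found this cycle" % name

    with ThreadPoolExecutor(max_workers=1) as pool:  # [1] thread, as logged
        for line in pool.map(run, pollsters):
            print(line)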
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.852 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.853 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.856 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.857 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.857 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.858 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.858 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.858 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.858 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.859 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.859 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.860 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.860 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.861 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.861 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.861 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.861 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.862 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.862 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.863 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.863 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.863 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8d5ff50>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.865 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.865 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.866 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.866 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.866 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.867 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.867 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.867 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.867 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.867 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.867 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.868 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.868 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.868 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.868 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.868 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.869 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.869 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.869 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.870 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.870 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.870 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.870 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.870 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.871 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.872 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.873 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:36:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:36:36.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
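Note: every pollster in the cycle above follows the same control flow: its discovery method runs first, and when discovery returns nothing the pollster is skipped for that interval, which is exactly what the empty discovery cache ({'local_instances': []}) on this idle hypervisor produces. A condensed sketch of that logic, with hypothetical names standing in for the real code in ceilometer/polling/manager.py:

    def _poll_once(pollster, discover, log):
        # Discovery first; on a compute node with no instances it returns [].
        resources = discover("local_instances")
        if not resources:
            log.debug("Skip pollster %s, no resources found this cycle",
                      pollster.name)
            return []
        # Only pollsters whose discovery found resources emit samples.
        return list(pollster.get_samples(resources))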
Nov 26 23:36:37 compute-0 nova_compute[189387]: 2025-11-26 23:36:37.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:36:37 compute-0 nova_compute[189387]: 2025-11-26 23:36:37.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:36:37 compute-0 nova_compute[189387]: 2025-11-26 23:36:37.157 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 23:36:37 compute-0 nova_compute[189387]: 2025-11-26 23:36:37.158 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:36:37 compute-0 nova_compute[189387]: 2025-11-26 23:36:37.158 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 23:36:37 compute-0 nova_compute[189387]: 2025-11-26 23:36:37.159 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:36:37 compute-0 nova_compute[189387]: 2025-11-26 23:36:37.199 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:36:37 compute-0 nova_compute[189387]: 2025-11-26 23:36:37.199 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:36:37 compute-0 nova_compute[189387]: 2025-11-26 23:36:37.200 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:36:37 compute-0 nova_compute[189387]: 2025-11-26 23:36:37.200 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:36:38 compute-0 nova_compute[189387]: 2025-11-26 23:36:38.587 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:36:38 compute-0 nova_compute[189387]: 2025-11-26 23:36:38.589 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5382MB free_disk=72.37899017333984GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 23:36:38 compute-0 nova_compute[189387]: 2025-11-26 23:36:38.589 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:36:38 compute-0 nova_compute[189387]: 2025-11-26 23:36:38.590 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:36:38 compute-0 nova_compute[189387]: 2025-11-26 23:36:38.817 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:36:38 compute-0 nova_compute[189387]: 2025-11-26 23:36:38.818 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 23:36:38 compute-0 nova_compute[189387]: 2025-11-26 23:36:38.862 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:36:38 compute-0 nova_compute[189387]: 2025-11-26 23:36:38.884 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
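Note: a quick check on that inventory, assuming placement's usual capacity formula (total - reserved) * allocation_ratio: this node advertises 32 VCPU, 7168 MB of RAM, and about 70 GB of disk to the scheduler.

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        # schedulable capacity = (total - reserved) * allocation_ratio
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0 / MEMORY_MB 7168.0 / DISK_GB 70.2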
Nov 26 23:36:38 compute-0 nova_compute[189387]: 2025-11-26 23:36:38.910 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:36:38 compute-0 nova_compute[189387]: 2025-11-26 23:36:38.910 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.321s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
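Note: the Acquiring/acquired/released triplets above are emitted by oslo.concurrency's lock helper; nova serializes the whole resource-tracker update under a single "compute_resources" semaphore, so the 0.321s hold time logged here is time during which no other claim on this node can proceed. A minimal sketch of the pattern (a sketch, not nova's actual code):

    from oslo_concurrency import lockutils

    class ResourceTrackerSketch:
        @lockutils.synchronized("compute_resources")
        def _update_available_resource(self, context):
            # Audits and instance claims share this lock, which is what
            # produces the waited/held timings logged above.
            ...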
Nov 26 23:36:39 compute-0 nova_compute[189387]: 2025-11-26 23:36:39.877 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:36:40 compute-0 nova_compute[189387]: 2025-11-26 23:36:40.120 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:36:40 compute-0 nova_compute[189387]: 2025-11-26 23:36:40.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:36:40 compute-0 nova_compute[189387]: 2025-11-26 23:36:40.294 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:36:40 compute-0 nova_compute[189387]: 2025-11-26 23:36:40.319 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:36:41 compute-0 podman[248389]: 2025-11-26 23:36:41.837632975 +0000 UTC m=+0.121486532 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 26 23:36:42 compute-0 nova_compute[189387]: 2025-11-26 23:36:42.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:36:45 compute-0 nova_compute[189387]: 2025-11-26 23:36:45.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:36:45 compute-0 nova_compute[189387]: 2025-11-26 23:36:45.300 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:36:45 compute-0 nova_compute[189387]: 2025-11-26 23:36:45.323 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:36:45 compute-0 podman[248409]: 2025-11-26 23:36:45.784300752 +0000 UTC m=+0.082239989 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:36:46 compute-0 nova_compute[189387]: 2025-11-26 23:36:46.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:36:46 compute-0 ovn_controller[97697]: 2025-11-26T23:36:46Z|00064|memory_trim|INFO|Detected inactivity (last active 30015 ms ago): trimming memory
Nov 26 23:36:50 compute-0 nova_compute[189387]: 2025-11-26 23:36:50.306 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:36:50 compute-0 nova_compute[189387]: 2025-11-26 23:36:50.325 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:36:52 compute-0 podman[248433]: 2025-11-26 23:36:52.793853494 +0000 UTC m=+0.091335922 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Nov 26 23:36:55 compute-0 nova_compute[189387]: 2025-11-26 23:36:55.310 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:36:55 compute-0 nova_compute[189387]: 2025-11-26 23:36:55.329 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:36:59 compute-0 podman[203621]: time="2025-11-26T23:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:36:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 26 23:36:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4350 "" "Go-http-client/1.1"
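Note: the two GETs above are podman_exporter scraping the libpod API over the unix socket it was configured with earlier (CONTAINER_HOST=unix:///run/podman/podman.sock). A stdlib-only sketch of the same container listing, assuming that socket path and API version:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection variant that connects to a unix socket."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print([c.get("Names") for c in containers])  # field name per this API version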
Nov 26 23:37:00 compute-0 nova_compute[189387]: 2025-11-26 23:37:00.314 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:37:00 compute-0 nova_compute[189387]: 2025-11-26 23:37:00.329 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:37:00 compute-0 podman[248452]: 2025-11-26 23:37:00.867065845 +0000 UTC m=+0.154434596 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 26 23:37:01 compute-0 openstack_network_exporter[205787]: ERROR   23:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:37:01 compute-0 openstack_network_exporter[205787]: ERROR   23:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:37:01 compute-0 openstack_network_exporter[205787]: ERROR   23:37:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:37:01 compute-0 openstack_network_exporter[205787]: ERROR   23:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:37:01 compute-0 openstack_network_exporter[205787]: ERROR   23:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:37:03 compute-0 podman[248479]: 2025-11-26 23:37:03.837030497 +0000 UTC m=+0.120101965 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, config_id=edpm, architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, release=1214.1726694543, version=9.4, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Nov 26 23:37:03 compute-0 podman[248481]: 2025-11-26 23:37:03.839273557 +0000 UTC m=+0.109541012 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 26 23:37:03 compute-0 podman[248480]: 2025-11-26 23:37:03.844374493 +0000 UTC m=+0.115170621 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 23:37:03 compute-0 podman[248485]: 2025-11-26 23:37:03.873641469 +0000 UTC m=+0.137003828 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 26 23:37:03 compute-0 podman[248487]: 2025-11-26 23:37:03.87367968 +0000 UTC m=+0.125399736 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=minimal rhel9, name=ubi9-minimal, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-type=git)
Nov 26 23:37:05 compute-0 nova_compute[189387]: 2025-11-26 23:37:05.319 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:37:05 compute-0 nova_compute[189387]: 2025-11-26 23:37:05.333 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:37:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:37:09.645 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:37:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:37:09.645 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:37:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:37:09.646 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:37:10 compute-0 nova_compute[189387]: 2025-11-26 23:37:10.325 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:37:10 compute-0 nova_compute[189387]: 2025-11-26 23:37:10.337 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:37:12 compute-0 podman[248571]: 2025-11-26 23:37:12.841664575 +0000 UTC m=+0.131053798 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 26 23:37:15 compute-0 nova_compute[189387]: 2025-11-26 23:37:15.330 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:37:15 compute-0 nova_compute[189387]: 2025-11-26 23:37:15.338 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:37:16 compute-0 podman[248590]: 2025-11-26 23:37:16.814787283 +0000 UTC m=+0.098254067 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:37:20 compute-0 nova_compute[189387]: 2025-11-26 23:37:20.334 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:37:20 compute-0 nova_compute[189387]: 2025-11-26 23:37:20.341 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:37:23 compute-0 podman[248614]: 2025-11-26 23:37:23.829057682 +0000 UTC m=+0.114626468 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 26 23:37:25 compute-0 nova_compute[189387]: 2025-11-26 23:37:25.341 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:37:25 compute-0 nova_compute[189387]: 2025-11-26 23:37:25.344 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:37:29 compute-0 podman[203621]: time="2025-11-26T23:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:37:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 26 23:37:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4344 "" "Go-http-client/1.1"
Nov 26 23:37:30 compute-0 nova_compute[189387]: 2025-11-26 23:37:30.343 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:37:30 compute-0 nova_compute[189387]: 2025-11-26 23:37:30.346 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:37:31 compute-0 openstack_network_exporter[205787]: ERROR   23:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:37:31 compute-0 openstack_network_exporter[205787]: ERROR   23:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:37:31 compute-0 openstack_network_exporter[205787]: ERROR   23:37:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:37:31 compute-0 openstack_network_exporter[205787]: ERROR   23:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:37:31 compute-0 openstack_network_exporter[205787]: ERROR   23:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:37:31 compute-0 podman[248634]: 2025-11-26 23:37:31.90625948 +0000 UTC m=+0.184933595 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 26 23:37:34 compute-0 podman[248664]: 2025-11-26 23:37:34.816632932 +0000 UTC m=+0.082997479 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 26 23:37:34 compute-0 podman[248660]: 2025-11-26 23:37:34.836390892 +0000 UTC m=+0.116475087 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:37:34 compute-0 podman[248667]: 2025-11-26 23:37:34.844370646 +0000 UTC m=+0.113322142 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:37:34 compute-0 podman[248659]: 2025-11-26 23:37:34.855280729 +0000 UTC m=+0.144497109 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, release-0.7.12=, build-date=2024-09-18T21:23:30, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9)
Nov 26 23:37:34 compute-0 podman[248674]: 2025-11-26 23:37:34.869118551 +0000 UTC m=+0.124110272 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-type=git, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter)
Nov 26 23:37:35 compute-0 nova_compute[189387]: 2025-11-26 23:37:35.347 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:37:35 compute-0 nova_compute[189387]: 2025-11-26 23:37:35.349 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:37:37 compute-0 nova_compute[189387]: 2025-11-26 23:37:37.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:37:37 compute-0 nova_compute[189387]: 2025-11-26 23:37:37.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:37:37 compute-0 nova_compute[189387]: 2025-11-26 23:37:37.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 23:37:37 compute-0 nova_compute[189387]: 2025-11-26 23:37:37.149 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 23:37:37 compute-0 nova_compute[189387]: 2025-11-26 23:37:37.150 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:37:37 compute-0 nova_compute[189387]: 2025-11-26 23:37:37.190 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:37:37 compute-0 nova_compute[189387]: 2025-11-26 23:37:37.191 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:37:37 compute-0 nova_compute[189387]: 2025-11-26 23:37:37.191 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:37:37 compute-0 nova_compute[189387]: 2025-11-26 23:37:37.191 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:37:37 compute-0 nova_compute[189387]: 2025-11-26 23:37:37.607 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:37:37 compute-0 nova_compute[189387]: 2025-11-26 23:37:37.608 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5369MB free_disk=72.38017654418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 23:37:37 compute-0 nova_compute[189387]: 2025-11-26 23:37:37.608 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:37:37 compute-0 nova_compute[189387]: 2025-11-26 23:37:37.609 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:37:37 compute-0 nova_compute[189387]: 2025-11-26 23:37:37.688 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:37:37 compute-0 nova_compute[189387]: 2025-11-26 23:37:37.688 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 23:37:37 compute-0 nova_compute[189387]: 2025-11-26 23:37:37.720 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:37:37 compute-0 nova_compute[189387]: 2025-11-26 23:37:37.737 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 23:37:37 compute-0 nova_compute[189387]: 2025-11-26 23:37:37.740 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:37:37 compute-0 nova_compute[189387]: 2025-11-26 23:37:37.740 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:37:38 compute-0 nova_compute[189387]: 2025-11-26 23:37:38.716 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:37:38 compute-0 nova_compute[189387]: 2025-11-26 23:37:38.717 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 23:37:40 compute-0 nova_compute[189387]: 2025-11-26 23:37:40.121 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:37:40 compute-0 nova_compute[189387]: 2025-11-26 23:37:40.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:37:40 compute-0 nova_compute[189387]: 2025-11-26 23:37:40.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:37:40 compute-0 nova_compute[189387]: 2025-11-26 23:37:40.351 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 23:37:40 compute-0 nova_compute[189387]: 2025-11-26 23:37:40.353 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 23:37:40 compute-0 nova_compute[189387]: 2025-11-26 23:37:40.353 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 26 23:37:40 compute-0 nova_compute[189387]: 2025-11-26 23:37:40.354 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 23:37:40 compute-0 nova_compute[189387]: 2025-11-26 23:37:40.354 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 23:37:40 compute-0 nova_compute[189387]: 2025-11-26 23:37:40.356 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:37:42 compute-0 nova_compute[189387]: 2025-11-26 23:37:42.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:37:43 compute-0 podman[248754]: 2025-11-26 23:37:43.801748246 +0000 UTC m=+0.101309280 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 23:37:45 compute-0 nova_compute[189387]: 2025-11-26 23:37:45.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:37:45 compute-0 nova_compute[189387]: 2025-11-26 23:37:45.357 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 23:37:45 compute-0 nova_compute[189387]: 2025-11-26 23:37:45.358 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 23:37:45 compute-0 nova_compute[189387]: 2025-11-26 23:37:45.358 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 26 23:37:45 compute-0 nova_compute[189387]: 2025-11-26 23:37:45.359 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 23:37:45 compute-0 nova_compute[189387]: 2025-11-26 23:37:45.359 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 23:37:45 compute-0 nova_compute[189387]: 2025-11-26 23:37:45.362 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:37:47 compute-0 nova_compute[189387]: 2025-11-26 23:37:47.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:37:47 compute-0 podman[248772]: 2025-11-26 23:37:47.810801117 +0000 UTC m=+0.098895025 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 23:37:50 compute-0 nova_compute[189387]: 2025-11-26 23:37:50.361 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:37:54 compute-0 podman[248798]: 2025-11-26 23:37:54.794261388 +0000 UTC m=+0.091550548 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Nov 26 23:37:55 compute-0 nova_compute[189387]: 2025-11-26 23:37:55.364 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:37:59 compute-0 nova_compute[189387]: 2025-11-26 23:37:59.120 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:37:59 compute-0 podman[203621]: time="2025-11-26T23:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:37:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 26 23:37:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4349 "" "Go-http-client/1.1"
Nov 26 23:38:00 compute-0 nova_compute[189387]: 2025-11-26 23:38:00.366 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 23:38:00 compute-0 nova_compute[189387]: 2025-11-26 23:38:00.368 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:38:00 compute-0 nova_compute[189387]: 2025-11-26 23:38:00.368 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 26 23:38:00 compute-0 nova_compute[189387]: 2025-11-26 23:38:00.368 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 23:38:00 compute-0 nova_compute[189387]: 2025-11-26 23:38:00.369 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 23:38:00 compute-0 nova_compute[189387]: 2025-11-26 23:38:00.371 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:38:01 compute-0 openstack_network_exporter[205787]: ERROR   23:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:38:01 compute-0 openstack_network_exporter[205787]: ERROR   23:38:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:38:01 compute-0 openstack_network_exporter[205787]: ERROR   23:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:38:01 compute-0 openstack_network_exporter[205787]: ERROR   23:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:38:01 compute-0 openstack_network_exporter[205787]: ERROR   23:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:38:02 compute-0 podman[248819]: 2025-11-26 23:38:02.902713844 +0000 UTC m=+0.186690462 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 26 23:38:05 compute-0 nova_compute[189387]: 2025-11-26 23:38:05.371 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:38:05 compute-0 podman[248844]: 2025-11-26 23:38:05.818544263 +0000 UTC m=+0.093986973 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 23:38:05 compute-0 podman[248845]: 2025-11-26 23:38:05.834429589 +0000 UTC m=+0.093217332 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 26 23:38:05 compute-0 podman[248846]: 2025-11-26 23:38:05.846961766 +0000 UTC m=+0.106129259 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 26 23:38:05 compute-0 podman[248843]: 2025-11-26 23:38:05.847768768 +0000 UTC m=+0.120927627 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, vcs-type=git, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, container_name=kepler, release-0.7.12=, version=9.4, maintainer=Red Hat, Inc.)
Nov 26 23:38:05 compute-0 podman[248856]: 2025-11-26 23:38:05.848105856 +0000 UTC m=+0.099785918 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, container_name=openstack_network_exporter, version=9.6, io.openshift.tags=minimal rhel9, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1755695350, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container)
Nov 26 23:38:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:38:09.647 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:38:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:38:09.647 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:38:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:38:09.648 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:38:10 compute-0 nova_compute[189387]: 2025-11-26 23:38:10.373 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 23:38:14 compute-0 podman[248938]: 2025-11-26 23:38:14.855228953 +0000 UTC m=+0.151571069 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 26 23:38:15 compute-0 nova_compute[189387]: 2025-11-26 23:38:15.375 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 23:38:15 compute-0 nova_compute[189387]: 2025-11-26 23:38:15.377 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:38:15 compute-0 nova_compute[189387]: 2025-11-26 23:38:15.377 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 26 23:38:15 compute-0 nova_compute[189387]: 2025-11-26 23:38:15.377 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 23:38:15 compute-0 nova_compute[189387]: 2025-11-26 23:38:15.377 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 23:38:18 compute-0 podman[248957]: 2025-11-26 23:38:18.823889339 +0000 UTC m=+0.116525189 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:38:20 compute-0 nova_compute[189387]: 2025-11-26 23:38:20.379 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 23:38:20 compute-0 nova_compute[189387]: 2025-11-26 23:38:20.381 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:38:25 compute-0 nova_compute[189387]: 2025-11-26 23:38:25.382 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 23:38:25 compute-0 nova_compute[189387]: 2025-11-26 23:38:25.384 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:38:25 compute-0 nova_compute[189387]: 2025-11-26 23:38:25.385 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 26 23:38:25 compute-0 nova_compute[189387]: 2025-11-26 23:38:25.385 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 23:38:25 compute-0 nova_compute[189387]: 2025-11-26 23:38:25.386 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 23:38:25 compute-0 nova_compute[189387]: 2025-11-26 23:38:25.388 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:38:25 compute-0 podman[248982]: 2025-11-26 23:38:25.807008921 +0000 UTC m=+0.106028407 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 26 23:38:29 compute-0 podman[203621]: time="2025-11-26T23:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:38:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 26 23:38:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4341 "" "Go-http-client/1.1"
Nov 26 23:38:30 compute-0 nova_compute[189387]: 2025-11-26 23:38:30.389 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:38:31 compute-0 openstack_network_exporter[205787]: ERROR   23:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:38:31 compute-0 openstack_network_exporter[205787]: ERROR   23:38:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:38:31 compute-0 openstack_network_exporter[205787]: ERROR   23:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:38:31 compute-0 openstack_network_exporter[205787]: ERROR   23:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:38:33 compute-0 podman[249003]: 2025-11-26 23:38:33.884256008 +0000 UTC m=+0.167710872 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 26 23:38:35 compute-0 nova_compute[189387]: 2025-11-26 23:38:35.390 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:38:35 compute-0 nova_compute[189387]: 2025-11-26 23:38:35.392 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:38:36 compute-0 podman[249030]: 2025-11-26 23:38:36.792015671 +0000 UTC m=+0.078321383 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 23:38:36 compute-0 podman[249029]: 2025-11-26 23:38:36.796733788 +0000 UTC m=+0.081502818 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 23:38:36 compute-0 podman[249031]: 2025-11-26 23:38:36.802822431 +0000 UTC m=+0.079183256 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 23:38:36 compute-0 podman[249032]: 2025-11-26 23:38:36.824419791 +0000 UTC m=+0.090444379 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Nov 26 23:38:36 compute-0 podman[249028]: 2025-11-26 23:38:36.831188053 +0000 UTC m=+0.113243501 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vcs-type=git, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, vendor=Red Hat, Inc., version=9.4, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.847 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.847 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.847 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.849 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.852 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.852 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.852 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.852 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.852 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.852 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.852 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.853 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.853 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.853 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.853 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.853 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.853 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.853 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.854 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.854 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.854 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.854 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.854 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.854 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.855 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:38:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:38:36.856 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:38:37 compute-0 nova_compute[189387]: 2025-11-26 23:38:37.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:38:37 compute-0 nova_compute[189387]: 2025-11-26 23:38:37.195 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:38:37 compute-0 nova_compute[189387]: 2025-11-26 23:38:37.196 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:38:37 compute-0 nova_compute[189387]: 2025-11-26 23:38:37.196 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:38:37 compute-0 nova_compute[189387]: 2025-11-26 23:38:37.196 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 23:38:37 compute-0 nova_compute[189387]: 2025-11-26 23:38:37.646 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 23:38:37 compute-0 nova_compute[189387]: 2025-11-26 23:38:37.648 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5364MB free_disk=72.38017272949219GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 23:38:37 compute-0 nova_compute[189387]: 2025-11-26 23:38:37.648 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:38:37 compute-0 nova_compute[189387]: 2025-11-26 23:38:37.648 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:38:37 compute-0 nova_compute[189387]: 2025-11-26 23:38:37.736 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 23:38:37 compute-0 nova_compute[189387]: 2025-11-26 23:38:37.736 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 23:38:37 compute-0 nova_compute[189387]: 2025-11-26 23:38:37.768 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:38:37 compute-0 nova_compute[189387]: 2025-11-26 23:38:37.789 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 23:38:37 compute-0 nova_compute[189387]: 2025-11-26 23:38:37.791 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 23:38:37 compute-0 nova_compute[189387]: 2025-11-26 23:38:37.792 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.144s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:38:38 compute-0 nova_compute[189387]: 2025-11-26 23:38:38.792 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:38:38 compute-0 nova_compute[189387]: 2025-11-26 23:38:38.793 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 23:38:38 compute-0 nova_compute[189387]: 2025-11-26 23:38:38.794 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 23:38:38 compute-0 nova_compute[189387]: 2025-11-26 23:38:38.811 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 26 23:38:38 compute-0 nova_compute[189387]: 2025-11-26 23:38:38.814 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:38:38 compute-0 nova_compute[189387]: 2025-11-26 23:38:38.814 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 23:38:40 compute-0 nova_compute[189387]: 2025-11-26 23:38:40.393 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:38:42 compute-0 nova_compute[189387]: 2025-11-26 23:38:42.126 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:38:42 compute-0 nova_compute[189387]: 2025-11-26 23:38:42.127 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:38:42 compute-0 nova_compute[189387]: 2025-11-26 23:38:42.127 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:38:43 compute-0 nova_compute[189387]: 2025-11-26 23:38:43.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:38:45 compute-0 nova_compute[189387]: 2025-11-26 23:38:45.396 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:38:45 compute-0 podman[249124]: 2025-11-26 23:38:45.827476778 +0000 UTC m=+0.117725521 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 26 23:38:46 compute-0 nova_compute[189387]: 2025-11-26 23:38:46.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:38:48 compute-0 nova_compute[189387]: 2025-11-26 23:38:48.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:38:49 compute-0 podman[249143]: 2025-11-26 23:38:49.859873615 +0000 UTC m=+0.145041395 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 23:38:50 compute-0 nova_compute[189387]: 2025-11-26 23:38:50.399 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:38:55 compute-0 nova_compute[189387]: 2025-11-26 23:38:55.401 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:38:56 compute-0 podman[249167]: 2025-11-26 23:38:56.81402901 +0000 UTC m=+0.105326108 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:38:59 compute-0 podman[203621]: time="2025-11-26T23:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:38:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 26 23:38:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4347 "" "Go-http-client/1.1"
Nov 26 23:39:00 compute-0 nova_compute[189387]: 2025-11-26 23:39:00.403 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:39:00 compute-0 nova_compute[189387]: 2025-11-26 23:39:00.405 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:39:01 compute-0 openstack_network_exporter[205787]: ERROR   23:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:39:01 compute-0 openstack_network_exporter[205787]: ERROR   23:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:39:01 compute-0 openstack_network_exporter[205787]: ERROR   23:39:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:39:01 compute-0 openstack_network_exporter[205787]: ERROR   23:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:39:01 compute-0 openstack_network_exporter[205787]: 
Nov 26 23:39:01 compute-0 openstack_network_exporter[205787]: ERROR   23:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:39:01 compute-0 openstack_network_exporter[205787]: 
Nov 26 23:39:04 compute-0 podman[249186]: 2025-11-26 23:39:04.838401381 +0000 UTC m=+0.126994610 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller)
Nov 26 23:39:05 compute-0 nova_compute[189387]: 2025-11-26 23:39:05.407 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:39:05 compute-0 nova_compute[189387]: 2025-11-26 23:39:05.408 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:39:05 compute-0 nova_compute[189387]: 2025-11-26 23:39:05.409 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 26 23:39:05 compute-0 nova_compute[189387]: 2025-11-26 23:39:05.409 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 26 23:39:05 compute-0 nova_compute[189387]: 2025-11-26 23:39:05.409 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 26 23:39:05 compute-0 nova_compute[189387]: 2025-11-26 23:39:05.410 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:39:07 compute-0 podman[249214]: 2025-11-26 23:39:07.786283798 +0000 UTC m=+0.073335683 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 26 23:39:07 compute-0 podman[249212]: 2025-11-26 23:39:07.795815552 +0000 UTC m=+0.087126410 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_id=edpm, io.openshift.expose-services=, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc.)
Nov 26 23:39:07 compute-0 podman[249215]: 2025-11-26 23:39:07.796292035 +0000 UTC m=+0.076656271 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:39:07 compute-0 podman[249213]: 2025-11-26 23:39:07.797378944 +0000 UTC m=+0.088476526 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 23:39:07 compute-0 podman[249218]: 2025-11-26 23:39:07.81452066 +0000 UTC m=+0.093072478 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, architecture=x86_64, config_id=edpm, maintainer=Red Hat, Inc., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-type=git, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 26 23:39:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:39:09.648 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:39:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:39:09.648 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:39:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:39:09.648 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:39:10 compute-0 nova_compute[189387]: 2025-11-26 23:39:10.411 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:39:15 compute-0 nova_compute[189387]: 2025-11-26 23:39:15.414 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:39:15 compute-0 nova_compute[189387]: 2025-11-26 23:39:15.416 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:39:16 compute-0 podman[249311]: 2025-11-26 23:39:16.844331514 +0000 UTC m=+0.127114466 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd)
Nov 26 23:39:18 compute-0 nova_compute[189387]: 2025-11-26 23:39:18.951 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:39:20 compute-0 nova_compute[189387]: 2025-11-26 23:39:20.416 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:39:20 compute-0 podman[249331]: 2025-11-26 23:39:20.826349776 +0000 UTC m=+0.120986582 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 23:39:25 compute-0 nova_compute[189387]: 2025-11-26 23:39:25.418 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:39:25 compute-0 nova_compute[189387]: 2025-11-26 23:39:25.420 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:39:25 compute-0 nova_compute[189387]: 2025-11-26 23:39:25.421 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 26 23:39:25 compute-0 nova_compute[189387]: 2025-11-26 23:39:25.421 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 26 23:39:25 compute-0 nova_compute[189387]: 2025-11-26 23:39:25.422 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 26 23:39:25 compute-0 nova_compute[189387]: 2025-11-26 23:39:25.423 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:39:27 compute-0 podman[249355]: 2025-11-26 23:39:27.789318618 +0000 UTC m=+0.078777139 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 23:39:29 compute-0 podman[203621]: time="2025-11-26T23:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:39:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 26 23:39:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4342 "" "Go-http-client/1.1"
Nov 26 23:39:30 compute-0 nova_compute[189387]: 2025-11-26 23:39:30.423 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:39:31 compute-0 nova_compute[189387]: 2025-11-26 23:39:31.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:39:31 compute-0 nova_compute[189387]: 2025-11-26 23:39:31.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 26 23:39:31 compute-0 nova_compute[189387]: 2025-11-26 23:39:31.332 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 26 23:39:31 compute-0 openstack_network_exporter[205787]: ERROR   23:39:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:39:31 compute-0 openstack_network_exporter[205787]: ERROR   23:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:39:31 compute-0 openstack_network_exporter[205787]: ERROR   23:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:39:31 compute-0 openstack_network_exporter[205787]: ERROR   23:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:39:31 compute-0 openstack_network_exporter[205787]: 
Nov 26 23:39:31 compute-0 openstack_network_exporter[205787]: ERROR   23:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:39:31 compute-0 openstack_network_exporter[205787]: 
Nov 26 23:39:35 compute-0 nova_compute[189387]: 2025-11-26 23:39:35.425 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:39:35 compute-0 nova_compute[189387]: 2025-11-26 23:39:35.427 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:39:35 compute-0 nova_compute[189387]: 2025-11-26 23:39:35.427 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Nov 26 23:39:35 compute-0 nova_compute[189387]: 2025-11-26 23:39:35.428 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 26 23:39:35 compute-0 nova_compute[189387]: 2025-11-26 23:39:35.429 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 26 23:39:35 compute-0 nova_compute[189387]: 2025-11-26 23:39:35.431 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:39:35 compute-0 podman[249374]: 2025-11-26 23:39:35.874621828 +0000 UTC m=+0.164341876 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 26 23:39:38 compute-0 nova_compute[189387]: 2025-11-26 23:39:38.332 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:39:38 compute-0 nova_compute[189387]: 2025-11-26 23:39:38.333 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 23:39:38 compute-0 nova_compute[189387]: 2025-11-26 23:39:38.333 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 23:39:38 compute-0 nova_compute[189387]: 2025-11-26 23:39:38.352 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 26 23:39:38 compute-0 podman[249399]: 2025-11-26 23:39:38.784374322 +0000 UTC m=+0.077507564 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 23:39:38 compute-0 podman[249398]: 2025-11-26 23:39:38.81246213 +0000 UTC m=+0.098333749 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, config_id=edpm, io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, managed_by=edpm_ansible, container_name=kepler, name=ubi9, release-0.7.12=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Nov 26 23:39:38 compute-0 podman[249401]: 2025-11-26 23:39:38.822031925 +0000 UTC m=+0.094294661 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 26 23:39:38 compute-0 podman[249407]: 2025-11-26 23:39:38.824315416 +0000 UTC m=+0.091944769 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-type=git, config_id=edpm, distribution-scope=public)
Nov 26 23:39:38 compute-0 podman[249400]: 2025-11-26 23:39:38.844478182 +0000 UTC m=+0.118595197 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 26 23:39:39 compute-0 nova_compute[189387]: 2025-11-26 23:39:39.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:39:39 compute-0 nova_compute[189387]: 2025-11-26 23:39:39.172 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:39:39 compute-0 nova_compute[189387]: 2025-11-26 23:39:39.172 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:39:39 compute-0 nova_compute[189387]: 2025-11-26 23:39:39.173 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:39:39 compute-0 nova_compute[189387]: 2025-11-26 23:39:39.173 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:39:39 compute-0 nova_compute[189387]: 2025-11-26 23:39:39.617 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:39:39 compute-0 nova_compute[189387]: 2025-11-26 23:39:39.619 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5357MB free_disk=72.38017272949219GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 23:39:39 compute-0 nova_compute[189387]: 2025-11-26 23:39:39.619 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:39:39 compute-0 nova_compute[189387]: 2025-11-26 23:39:39.620 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:39:39 compute-0 nova_compute[189387]: 2025-11-26 23:39:39.917 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:39:39 compute-0 nova_compute[189387]: 2025-11-26 23:39:39.919 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 23:39:40 compute-0 nova_compute[189387]: 2025-11-26 23:39:40.041 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing inventories for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 26 23:39:40 compute-0 nova_compute[189387]: 2025-11-26 23:39:40.187 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Updating ProviderTree inventory for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 26 23:39:40 compute-0 nova_compute[189387]: 2025-11-26 23:39:40.188 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Updating inventory in ProviderTree for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 26 23:39:40 compute-0 nova_compute[189387]: 2025-11-26 23:39:40.207 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing aggregate associations for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 26 23:39:40 compute-0 nova_compute[189387]: 2025-11-26 23:39:40.232 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing trait associations for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78, traits: COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE41,HW_CPU_X86_AMD_SVM,HW_CPU_X86_MMX,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,HW_CPU_X86_SSE,HW_CPU_X86_AESNI,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI2,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 26 23:39:40 compute-0 nova_compute[189387]: 2025-11-26 23:39:40.264 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:39:40 compute-0 nova_compute[189387]: 2025-11-26 23:39:40.280 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 23:39:40 compute-0 nova_compute[189387]: 2025-11-26 23:39:40.282 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:39:40 compute-0 nova_compute[189387]: 2025-11-26 23:39:40.282 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:39:40 compute-0 nova_compute[189387]: 2025-11-26 23:39:40.429 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:39:40 compute-0 nova_compute[189387]: 2025-11-26 23:39:40.432 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:39:41 compute-0 nova_compute[189387]: 2025-11-26 23:39:41.283 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:39:41 compute-0 nova_compute[189387]: 2025-11-26 23:39:41.283 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 23:39:43 compute-0 nova_compute[189387]: 2025-11-26 23:39:43.120 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:39:44 compute-0 nova_compute[189387]: 2025-11-26 23:39:44.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:39:44 compute-0 nova_compute[189387]: 2025-11-26 23:39:44.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:39:45 compute-0 nova_compute[189387]: 2025-11-26 23:39:45.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:39:45 compute-0 nova_compute[189387]: 2025-11-26 23:39:45.432 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:39:46 compute-0 nova_compute[189387]: 2025-11-26 23:39:46.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:39:47 compute-0 podman[249489]: 2025-11-26 23:39:47.778413276 +0000 UTC m=+0.076348233 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 26 23:39:49 compute-0 nova_compute[189387]: 2025-11-26 23:39:49.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:39:50 compute-0 nova_compute[189387]: 2025-11-26 23:39:50.435 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 26 23:39:50 compute-0 nova_compute[189387]: 2025-11-26 23:39:50.437 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:39:50 compute-0 nova_compute[189387]: 2025-11-26 23:39:50.437 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Nov 26 23:39:50 compute-0 nova_compute[189387]: 2025-11-26 23:39:50.437 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 23:39:50 compute-0 nova_compute[189387]: 2025-11-26 23:39:50.438 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Nov 26 23:39:50 compute-0 nova_compute[189387]: 2025-11-26 23:39:50.440 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:39:51 compute-0 nova_compute[189387]: 2025-11-26 23:39:51.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:39:51 compute-0 podman[249511]: 2025-11-26 23:39:51.791460123 +0000 UTC m=+0.088176848 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:39:53 compute-0 nova_compute[189387]: 2025-11-26 23:39:53.144 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:39:53 compute-0 nova_compute[189387]: 2025-11-26 23:39:53.144 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 26 23:39:53 compute-0 nova_compute[189387]: 2025-11-26 23:39:53.948 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:39:55 compute-0 nova_compute[189387]: 2025-11-26 23:39:55.438 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:39:55 compute-0 nova_compute[189387]: 2025-11-26 23:39:55.441 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:39:58 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:39:58.689 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:74:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:17:d1:48:8c:c3'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 23:39:58 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:39:58.689 106595 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 26 23:39:58 compute-0 nova_compute[189387]: 2025-11-26 23:39:58.690 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:39:58 compute-0 podman[249534]: 2025-11-26 23:39:58.820566204 +0000 UTC m=+0.102546892 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm)
Nov 26 23:39:59 compute-0 podman[203621]: time="2025-11-26T23:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:39:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 26 23:39:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4341 "" "Go-http-client/1.1"
Nov 26 23:40:00 compute-0 nova_compute[189387]: 2025-11-26 23:40:00.145 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:40:00 compute-0 nova_compute[189387]: 2025-11-26 23:40:00.440 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:40:00 compute-0 nova_compute[189387]: 2025-11-26 23:40:00.443 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:40:01 compute-0 openstack_network_exporter[205787]: ERROR   23:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:40:01 compute-0 openstack_network_exporter[205787]: ERROR   23:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:40:01 compute-0 openstack_network_exporter[205787]: ERROR   23:40:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:40:01 compute-0 openstack_network_exporter[205787]: ERROR   23:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:40:01 compute-0 openstack_network_exporter[205787]: ERROR   23:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:40:01 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:40:01.693 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbd59242-3683-4df7-8a2a-12b2eb702783, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 23:40:05 compute-0 nova_compute[189387]: 2025-11-26 23:40:05.444 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:40:05 compute-0 nova_compute[189387]: 2025-11-26 23:40:05.448 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:40:06 compute-0 podman[249554]: 2025-11-26 23:40:06.890459414 +0000 UTC m=+0.165502757 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 23:40:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:40:09.649 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:40:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:40:09.650 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:40:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:40:09.650 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:40:09 compute-0 podman[249580]: 2025-11-26 23:40:09.832391486 +0000 UTC m=+0.114338715 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, config_id=edpm, architecture=x86_64, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, maintainer=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, distribution-scope=public, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, container_name=kepler, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Nov 26 23:40:09 compute-0 podman[249582]: 2025-11-26 23:40:09.834383249 +0000 UTC m=+0.106364832 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Nov 26 23:40:09 compute-0 podman[249588]: 2025-11-26 23:40:09.83931185 +0000 UTC m=+0.102030177 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, config_id=edpm, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc.)
Nov 26 23:40:09 compute-0 podman[249583]: 2025-11-26 23:40:09.847265912 +0000 UTC m=+0.113261416 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:40:09 compute-0 podman[249581]: 2025-11-26 23:40:09.865945479 +0000 UTC m=+0.139085424 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 23:40:10 compute-0 nova_compute[189387]: 2025-11-26 23:40:10.446 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:40:10 compute-0 nova_compute[189387]: 2025-11-26 23:40:10.451 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:40:15 compute-0 nova_compute[189387]: 2025-11-26 23:40:15.447 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:40:15 compute-0 nova_compute[189387]: 2025-11-26 23:40:15.453 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:40:18 compute-0 podman[249673]: 2025-11-26 23:40:18.798971637 +0000 UTC m=+0.089099932 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 26 23:40:20 compute-0 nova_compute[189387]: 2025-11-26 23:40:20.449 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:40:20 compute-0 nova_compute[189387]: 2025-11-26 23:40:20.455 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:40:22 compute-0 podman[249693]: 2025-11-26 23:40:22.825748029 +0000 UTC m=+0.117830307 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:40:25 compute-0 nova_compute[189387]: 2025-11-26 23:40:25.453 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:40:25 compute-0 nova_compute[189387]: 2025-11-26 23:40:25.456 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:40:29 compute-0 ovn_controller[97697]: 2025-11-26T23:40:29Z|00065|memory_trim|INFO|Detected inactivity (last active 30014 ms ago): trimming memory
Nov 26 23:40:29 compute-0 podman[203621]: time="2025-11-26T23:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:40:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 26 23:40:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4342 "" "Go-http-client/1.1"
Nov 26 23:40:29 compute-0 podman[249716]: 2025-11-26 23:40:29.848435681 +0000 UTC m=+0.131202464 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125)
Nov 26 23:40:30 compute-0 nova_compute[189387]: 2025-11-26 23:40:30.456 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:40:31 compute-0 openstack_network_exporter[205787]: ERROR   23:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:40:31 compute-0 openstack_network_exporter[205787]: ERROR   23:40:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:40:31 compute-0 openstack_network_exporter[205787]: ERROR   23:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:40:31 compute-0 openstack_network_exporter[205787]: ERROR   23:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:40:31 compute-0 openstack_network_exporter[205787]: ERROR   23:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:40:35 compute-0 nova_compute[189387]: 2025-11-26 23:40:35.460 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.847 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them; the polling process can therefore be expected to take longer. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.847 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
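The two DEBUG lines above record a polling task whose pollster count exceeds its worker-thread count, so the excess pollsters queue up and the cycle runs longer. A minimal stdlib sketch of that behavior (illustrative names only, not Ceilometer's implementation):

# A polling cycle with more pollster tasks than executor workers
# serializes the extra work, as the manager warns above.
import time
from concurrent.futures import ThreadPoolExecutor

POLLSTERS = ["disk.ephemeral.size", "network.incoming.packets", "cpu"]

def run_pollster(name: str) -> str:
    time.sleep(0.1)  # stand-in for discovery plus sampling work
    return name

# max_workers=1 matches "Processing pollsters for [pollsters] with [1]
# threads": three pollsters take ~0.3 s instead of ~0.1 s with 3 workers.
with ThreadPoolExecutor(max_workers=1) as executor:
    for name in executor.map(run_pollster, POLLSTERS):
        print("finished pollster", name)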
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.848 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.851 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.853 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.853 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.854 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.854 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.855 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.855 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.855 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.855 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.856 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.857 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.857 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.857 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.858 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.858 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.858 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.858 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.858 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.859 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.859 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.859 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.859 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.859 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.860 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.860 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.861 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.861 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.861 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.861 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.861 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.862 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.862 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.863 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.863 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.863 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
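Every pollster above runs the same [local_instances] discovery and is then skipped because discovery finds no instances; the discovery cache visible in the registration lines ([{'local_instances': []}]) keeps discovery from re-running for each pollster in the cycle. A hypothetical sketch of that caching pattern (illustrative, not Ceilometer's actual code):

# Per-cycle discovery cache: discovery runs once per method; an empty
# result makes every dependent pollster skip, as logged above.
def discover_local_instances():
    return []  # no instances are running on this compute node

def poll_cycle(pollsters):
    discovery_cache = {}
    for name in pollsters:
        if "local_instances" not in discovery_cache:
            discovery_cache["local_instances"] = discover_local_instances()
        if not discovery_cache["local_instances"]:
            print(f"Skip pollster {name}, no resources found this cycle")
            continue
        # ...sample each discovered instance here...

poll_cycle(["disk.ephemeral.size", "network.incoming.packets", "cpu"])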
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:40:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:40:36.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:40:37 compute-0 podman[249736]: 2025-11-26 23:40:37.894692722 +0000 UTC m=+0.182703816 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Nov 26 23:40:38 compute-0 nova_compute[189387]: 2025-11-26 23:40:38.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:40:38 compute-0 nova_compute[189387]: 2025-11-26 23:40:38.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:40:38 compute-0 nova_compute[189387]: 2025-11-26 23:40:38.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 23:40:38 compute-0 nova_compute[189387]: 2025-11-26 23:40:38.149 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 23:40:39 compute-0 nova_compute[189387]: 2025-11-26 23:40:39.419 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:40:39 compute-0 nova_compute[189387]: 2025-11-26 23:40:39.451 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:40:39 compute-0 nova_compute[189387]: 2025-11-26 23:40:39.703 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:40:40 compute-0 nova_compute[189387]: 2025-11-26 23:40:40.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:40:40 compute-0 nova_compute[189387]: 2025-11-26 23:40:40.163 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:40:40 compute-0 nova_compute[189387]: 2025-11-26 23:40:40.164 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:40:40 compute-0 nova_compute[189387]: 2025-11-26 23:40:40.165 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
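The three lockutils lines above show the named-lock bookkeeping nova uses around resource-tracker critical sections: how long the caller waited to acquire "compute_resources", then how long it held the lock. A stdlib sketch of the same pattern (this mirrors the logged output, not oslo.concurrency's implementation):

# Named lock with waited/held timing, in the style logged above.
import threading
import time
from contextlib import contextmanager

_locks = {"compute_resources": threading.Lock()}

@contextmanager
def timed_lock(name: str, owner: str):
    start = time.monotonic()
    with _locks[name]:
        print(f'Lock "{name}" acquired by "{owner}" :: waited '
              f'{time.monotonic() - start:.3f}s')
        held_start = time.monotonic()
        try:
            yield
        finally:
            print(f'Lock "{name}" "released" by "{owner}" :: held '
                  f'{time.monotonic() - held_start:.3f}s')

with timed_lock("compute_resources", "clean_compute_node_cache"):
    pass  # critical section: prune stale compute-node cache entries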
Nov 26 23:40:40 compute-0 nova_compute[189387]: 2025-11-26 23:40:40.165 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:40:40 compute-0 nova_compute[189387]: 2025-11-26 23:40:40.464 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:40:40 compute-0 nova_compute[189387]: 2025-11-26 23:40:40.641 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:40:40 compute-0 nova_compute[189387]: 2025-11-26 23:40:40.642 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5384MB free_disk=72.38011169433594GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
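The resource view above embeds the host's PCI device list as a JSON array. A hypothetical log-analysis snippet (not Nova code) showing how that field can be pulled out and summarized, here grouping devices by vendor (1af4 is virtio/Red Hat, 8086 is Intel):

# Extract the pci_devices JSON array from a resource-view log line and
# count devices per vendor_id. The shortened line below is illustrative.
import json
from collections import Counter

line = ('... pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": '
        '"0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", '
        '"numa_node": null, "label": "label_1af4_1001", "dev_type": '
        '"type-PCI"}] _report_hypervisor_resource_view ...')

start = line.index("pci_devices=") + len("pci_devices=")
end = line.index("]", start) + 1  # the array contains no nested lists
devices = json.loads(line[start:end])
print(Counter(dev["vendor_id"] for dev in devices))  # Counter({'1af4': 1})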
Nov 26 23:40:40 compute-0 nova_compute[189387]: 2025-11-26 23:40:40.642 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:40:40 compute-0 nova_compute[189387]: 2025-11-26 23:40:40.643 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:40:40 compute-0 nova_compute[189387]: 2025-11-26 23:40:40.716 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:40:40 compute-0 nova_compute[189387]: 2025-11-26 23:40:40.716 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 23:40:40 compute-0 nova_compute[189387]: 2025-11-26 23:40:40.745 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:40:40 compute-0 nova_compute[189387]: 2025-11-26 23:40:40.767 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
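The inventory dict above is what the placement service sizes allocations against; assuming placement's usual capacity formula, capacity = (total - reserved) * allocation_ratio, this node can schedule 32 VCPUs (8 * 4.0), 7168 MB of RAM (7680 - 512), and 70 GB of disk ((79 - 1) * 0.9, truncated). A quick worked check:

# Schedulable capacity implied by the logged inventory, assuming the
# usual placement formula (total - reserved) * allocation_ratio.
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    print(rc, int((inv["total"] - inv["reserved"]) * inv["allocation_ratio"]))
# VCPU 32, MEMORY_MB 7168, DISK_GB 70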
Nov 26 23:40:40 compute-0 nova_compute[189387]: 2025-11-26 23:40:40.768 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:40:40 compute-0 nova_compute[189387]: 2025-11-26 23:40:40.769 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:40:40 compute-0 podman[249764]: 2025-11-26 23:40:40.79181085 +0000 UTC m=+0.079534919 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 23:40:40 compute-0 podman[249763]: 2025-11-26 23:40:40.791971624 +0000 UTC m=+0.086941615 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, container_name=kepler, architecture=x86_64, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.buildah.version=1.29.0, release-0.7.12=, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc.)
Nov 26 23:40:40 compute-0 podman[249770]: 2025-11-26 23:40:40.808621437 +0000 UTC m=+0.088199879 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 26 23:40:40 compute-0 podman[249786]: 2025-11-26 23:40:40.835166644 +0000 UTC m=+0.085216180 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=edpm, name=ubi9-minimal, release=1755695350, vcs-type=git, com.redhat.component=ubi9-minimal-container, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41)
Nov 26 23:40:40 compute-0 podman[249776]: 2025-11-26 23:40:40.847599025 +0000 UTC m=+0.114018557 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0)
Nov 26 23:40:41 compute-0 nova_compute[189387]: 2025-11-26 23:40:41.768 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:40:41 compute-0 nova_compute[189387]: 2025-11-26 23:40:41.769 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 23:40:43 compute-0 nova_compute[189387]: 2025-11-26 23:40:43.120 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:40:43 compute-0 nova_compute[189387]: 2025-11-26 23:40:43.595 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:40:43 compute-0 nova_compute[189387]: 2025-11-26 23:40:43.780 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:40:44 compute-0 nova_compute[189387]: 2025-11-26 23:40:44.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:40:45 compute-0 nova_compute[189387]: 2025-11-26 23:40:45.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:40:45 compute-0 nova_compute[189387]: 2025-11-26 23:40:45.467 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:40:47 compute-0 nova_compute[189387]: 2025-11-26 23:40:47.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:40:47 compute-0 nova_compute[189387]: 2025-11-26 23:40:47.513 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:40:47 compute-0 nova_compute[189387]: 2025-11-26 23:40:47.730 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:40:48 compute-0 nova_compute[189387]: 2025-11-26 23:40:48.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:40:49 compute-0 nova_compute[189387]: 2025-11-26 23:40:49.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:40:49 compute-0 podman[249862]: 2025-11-26 23:40:49.802252159 +0000 UTC m=+0.091519957 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 26 23:40:50 compute-0 nova_compute[189387]: 2025-11-26 23:40:50.471 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:40:53 compute-0 podman[249882]: 2025-11-26 23:40:53.796909685 +0000 UTC m=+0.085828395 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 23:40:54 compute-0 nova_compute[189387]: 2025-11-26 23:40:54.716 189391 DEBUG oslo_concurrency.lockutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Acquiring lock "696e6032-d12c-4533-ae7c-c510dc917f0a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:40:54 compute-0 nova_compute[189387]: 2025-11-26 23:40:54.717 189391 DEBUG oslo_concurrency.lockutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Lock "696e6032-d12c-4533-ae7c-c510dc917f0a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:40:54 compute-0 nova_compute[189387]: 2025-11-26 23:40:54.737 189391 DEBUG nova.compute.manager [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 26 23:40:54 compute-0 nova_compute[189387]: 2025-11-26 23:40:54.873 189391 DEBUG oslo_concurrency.lockutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:40:54 compute-0 nova_compute[189387]: 2025-11-26 23:40:54.874 189391 DEBUG oslo_concurrency.lockutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:40:54 compute-0 nova_compute[189387]: 2025-11-26 23:40:54.886 189391 DEBUG nova.virt.hardware [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 26 23:40:54 compute-0 nova_compute[189387]: 2025-11-26 23:40:54.887 189391 INFO nova.compute.claims [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Claim successful on node compute-0.ctlplane.example.com
Nov 26 23:40:55 compute-0 nova_compute[189387]: 2025-11-26 23:40:54.999 189391 DEBUG nova.compute.provider_tree [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:40:55 compute-0 nova_compute[189387]: 2025-11-26 23:40:55.018 189391 DEBUG nova.scheduler.client.report [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 23:40:55 compute-0 nova_compute[189387]: 2025-11-26 23:40:55.037 189391 DEBUG oslo_concurrency.lockutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:40:55 compute-0 nova_compute[189387]: 2025-11-26 23:40:55.037 189391 DEBUG nova.compute.manager [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 26 23:40:55 compute-0 nova_compute[189387]: 2025-11-26 23:40:55.067 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:40:55 compute-0 nova_compute[189387]: 2025-11-26 23:40:55.106 189391 DEBUG nova.compute.manager [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 26 23:40:55 compute-0 nova_compute[189387]: 2025-11-26 23:40:55.107 189391 DEBUG nova.network.neutron [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 26 23:40:55 compute-0 nova_compute[189387]: 2025-11-26 23:40:55.128 189391 INFO nova.virt.libvirt.driver [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 26 23:40:55 compute-0 nova_compute[189387]: 2025-11-26 23:40:55.149 189391 DEBUG nova.compute.manager [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 26 23:40:55 compute-0 nova_compute[189387]: 2025-11-26 23:40:55.254 189391 DEBUG nova.compute.manager [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 26 23:40:55 compute-0 nova_compute[189387]: 2025-11-26 23:40:55.258 189391 DEBUG nova.virt.libvirt.driver [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 26 23:40:55 compute-0 nova_compute[189387]: 2025-11-26 23:40:55.259 189391 INFO nova.virt.libvirt.driver [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Creating image(s)
Nov 26 23:40:55 compute-0 nova_compute[189387]: 2025-11-26 23:40:55.260 189391 DEBUG oslo_concurrency.lockutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Acquiring lock "/var/lib/nova/instances/696e6032-d12c-4533-ae7c-c510dc917f0a/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:40:55 compute-0 nova_compute[189387]: 2025-11-26 23:40:55.261 189391 DEBUG oslo_concurrency.lockutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Lock "/var/lib/nova/instances/696e6032-d12c-4533-ae7c-c510dc917f0a/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:40:55 compute-0 nova_compute[189387]: 2025-11-26 23:40:55.263 189391 DEBUG oslo_concurrency.lockutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Lock "/var/lib/nova/instances/696e6032-d12c-4533-ae7c-c510dc917f0a/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:40:55 compute-0 nova_compute[189387]: 2025-11-26 23:40:55.264 189391 DEBUG oslo_concurrency.lockutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Acquiring lock "4bfc824fda96e5558a690ed70963ecd686d78685" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:40:55 compute-0 nova_compute[189387]: 2025-11-26 23:40:55.266 189391 DEBUG oslo_concurrency.lockutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Lock "4bfc824fda96e5558a690ed70963ecd686d78685" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:40:55 compute-0 nova_compute[189387]: 2025-11-26 23:40:55.473 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:40:55 compute-0 nova_compute[189387]: 2025-11-26 23:40:55.605 189391 DEBUG nova.policy [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '357477a3688848b099ed3f5f61c71771', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cda1d63c3f9d4791a18030ebba1c1b11', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 26 23:40:56 compute-0 nova_compute[189387]: 2025-11-26 23:40:56.698 189391 DEBUG oslo_concurrency.processutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:40:56 compute-0 nova_compute[189387]: 2025-11-26 23:40:56.795 189391 DEBUG oslo_concurrency.processutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685.part --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:40:56 compute-0 nova_compute[189387]: 2025-11-26 23:40:56.798 189391 DEBUG nova.virt.images [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] 948c6d5b-0d46-4aec-8649-b6cdcb1a5694 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 26 23:40:56 compute-0 nova_compute[189387]: 2025-11-26 23:40:56.801 189391 DEBUG nova.privsep.utils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 26 23:40:56 compute-0 nova_compute[189387]: 2025-11-26 23:40:56.802 189391 DEBUG oslo_concurrency.processutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685.part /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:40:56 compute-0 nova_compute[189387]: 2025-11-26 23:40:56.836 189391 DEBUG oslo_concurrency.lockutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Acquiring lock "8feca651-47c9-4aa9-b922-3552759e013f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:40:56 compute-0 nova_compute[189387]: 2025-11-26 23:40:56.837 189391 DEBUG oslo_concurrency.lockutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Lock "8feca651-47c9-4aa9-b922-3552759e013f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:40:56 compute-0 nova_compute[189387]: 2025-11-26 23:40:56.854 189391 DEBUG nova.compute.manager [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 26 23:40:56 compute-0 nova_compute[189387]: 2025-11-26 23:40:56.915 189391 DEBUG oslo_concurrency.lockutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:40:56 compute-0 nova_compute[189387]: 2025-11-26 23:40:56.916 189391 DEBUG oslo_concurrency.lockutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:40:56 compute-0 nova_compute[189387]: 2025-11-26 23:40:56.923 189391 DEBUG nova.virt.hardware [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 26 23:40:56 compute-0 nova_compute[189387]: 2025-11-26 23:40:56.923 189391 INFO nova.compute.claims [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Claim successful on node compute-0.ctlplane.example.com
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.050 189391 DEBUG nova.compute.provider_tree [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.058 189391 DEBUG oslo_concurrency.processutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685.part /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685.converted" returned: 0 in 0.255s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.062 189391 DEBUG oslo_concurrency.processutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.082 189391 DEBUG nova.scheduler.client.report [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.104 189391 DEBUG oslo_concurrency.lockutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.105 189391 DEBUG nova.compute.manager [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.143 189391 DEBUG nova.compute.manager [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.144 189391 DEBUG nova.network.neutron [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.160 189391 DEBUG oslo_concurrency.processutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685.converted --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.161 189391 DEBUG oslo_concurrency.lockutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Lock "4bfc824fda96e5558a690ed70963ecd686d78685" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.895s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.180 189391 INFO nova.virt.libvirt.driver [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.184 189391 DEBUG oslo_concurrency.processutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.204 189391 DEBUG nova.compute.manager [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.262 189391 DEBUG oslo_concurrency.processutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.262 189391 DEBUG oslo_concurrency.lockutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Acquiring lock "4bfc824fda96e5558a690ed70963ecd686d78685" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.263 189391 DEBUG oslo_concurrency.lockutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Lock "4bfc824fda96e5558a690ed70963ecd686d78685" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.274 189391 DEBUG oslo_concurrency.processutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.291 189391 DEBUG nova.compute.manager [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.293 189391 DEBUG nova.virt.libvirt.driver [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.294 189391 INFO nova.virt.libvirt.driver [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Creating image(s)
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.294 189391 DEBUG oslo_concurrency.lockutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Acquiring lock "/var/lib/nova/instances/8feca651-47c9-4aa9-b922-3552759e013f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.295 189391 DEBUG oslo_concurrency.lockutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Lock "/var/lib/nova/instances/8feca651-47c9-4aa9-b922-3552759e013f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.296 189391 DEBUG oslo_concurrency.lockutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Lock "/var/lib/nova/instances/8feca651-47c9-4aa9-b922-3552759e013f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.308 189391 DEBUG oslo_concurrency.processutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.331 189391 DEBUG oslo_concurrency.processutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.332 189391 DEBUG oslo_concurrency.processutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685,backing_fmt=raw /var/lib/nova/instances/696e6032-d12c-4533-ae7c-c510dc917f0a/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.374 189391 DEBUG oslo_concurrency.processutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685,backing_fmt=raw /var/lib/nova/instances/696e6032-d12c-4533-ae7c-c510dc917f0a/disk 1073741824" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.375 189391 DEBUG oslo_concurrency.lockutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Lock "4bfc824fda96e5558a690ed70963ecd686d78685" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.375 189391 DEBUG oslo_concurrency.processutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.401 189391 DEBUG oslo_concurrency.processutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.402 189391 DEBUG oslo_concurrency.lockutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Acquiring lock "4bfc824fda96e5558a690ed70963ecd686d78685" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.403 189391 DEBUG oslo_concurrency.lockutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Lock "4bfc824fda96e5558a690ed70963ecd686d78685" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.413 189391 DEBUG oslo_concurrency.processutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.462 189391 DEBUG oslo_concurrency.processutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.463 189391 DEBUG nova.virt.disk.api [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Checking if we can resize image /var/lib/nova/instances/696e6032-d12c-4533-ae7c-c510dc917f0a/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.464 189391 DEBUG oslo_concurrency.processutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/696e6032-d12c-4533-ae7c-c510dc917f0a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.494 189391 DEBUG oslo_concurrency.processutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.495 189391 DEBUG oslo_concurrency.processutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685,backing_fmt=raw /var/lib/nova/instances/8feca651-47c9-4aa9-b922-3552759e013f/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.558 189391 DEBUG oslo_concurrency.processutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/696e6032-d12c-4533-ae7c-c510dc917f0a/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.560 189391 DEBUG nova.virt.disk.api [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Cannot resize image /var/lib/nova/instances/696e6032-d12c-4533-ae7c-c510dc917f0a/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.561 189391 DEBUG nova.objects.instance [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Lazy-loading 'migration_context' on Instance uuid 696e6032-d12c-4533-ae7c-c510dc917f0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.565 189391 DEBUG oslo_concurrency.processutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685,backing_fmt=raw /var/lib/nova/instances/8feca651-47c9-4aa9-b922-3552759e013f/disk 1073741824" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.566 189391 DEBUG oslo_concurrency.lockutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Lock "4bfc824fda96e5558a690ed70963ecd686d78685" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.567 189391 DEBUG oslo_concurrency.processutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.590 189391 DEBUG nova.virt.libvirt.driver [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.592 189391 DEBUG nova.virt.libvirt.driver [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Ensure instance console log exists: /var/lib/nova/instances/696e6032-d12c-4533-ae7c-c510dc917f0a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.593 189391 DEBUG oslo_concurrency.lockutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.594 189391 DEBUG oslo_concurrency.lockutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.594 189391 DEBUG oslo_concurrency.lockutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.668 189391 DEBUG oslo_concurrency.processutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.670 189391 DEBUG nova.virt.disk.api [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Checking if we can resize image /var/lib/nova/instances/8feca651-47c9-4aa9-b922-3552759e013f/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.671 189391 DEBUG oslo_concurrency.processutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8feca651-47c9-4aa9-b922-3552759e013f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.704 189391 DEBUG nova.network.neutron [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Successfully created port: b2fce3d4-667e-40f1-8fad-b23b6e4286db _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.747 189391 DEBUG nova.policy [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2ffd5a94272f4e6faf977bacb6cd544a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4bac30b9fde54025a33de2b34a9c54e4', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.766 189391 DEBUG oslo_concurrency.processutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8feca651-47c9-4aa9-b922-3552759e013f/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.767 189391 DEBUG nova.virt.disk.api [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Cannot resize image /var/lib/nova/instances/8feca651-47c9-4aa9-b922-3552759e013f/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.768 189391 DEBUG nova.objects.instance [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Lazy-loading 'migration_context' on Instance uuid 8feca651-47c9-4aa9-b922-3552759e013f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.783 189391 DEBUG nova.virt.libvirt.driver [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.783 189391 DEBUG nova.virt.libvirt.driver [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Ensure instance console log exists: /var/lib/nova/instances/8feca651-47c9-4aa9-b922-3552759e013f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.784 189391 DEBUG oslo_concurrency.lockutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.784 189391 DEBUG oslo_concurrency.lockutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:40:57 compute-0 nova_compute[189387]: 2025-11-26 23:40:57.785 189391 DEBUG oslo_concurrency.lockutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:40:58 compute-0 nova_compute[189387]: 2025-11-26 23:40:58.512 189391 DEBUG nova.network.neutron [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Successfully created port: c92ee6b2-3f41-4732-97c1-c31d830eb511 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 26 23:40:59 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:40:59.488 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:74:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:17:d1:48:8c:c3'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:40:59 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:40:59.489 106595 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 26 23:40:59 compute-0 nova_compute[189387]: 2025-11-26 23:40:59.489 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:40:59 compute-0 nova_compute[189387]: 2025-11-26 23:40:59.611 189391 DEBUG nova.network.neutron [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Successfully updated port: c92ee6b2-3f41-4732-97c1-c31d830eb511 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 26 23:40:59 compute-0 nova_compute[189387]: 2025-11-26 23:40:59.629 189391 DEBUG oslo_concurrency.lockutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Acquiring lock "refresh_cache-8feca651-47c9-4aa9-b922-3552759e013f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:40:59 compute-0 nova_compute[189387]: 2025-11-26 23:40:59.630 189391 DEBUG oslo_concurrency.lockutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Acquired lock "refresh_cache-8feca651-47c9-4aa9-b922-3552759e013f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:40:59 compute-0 nova_compute[189387]: 2025-11-26 23:40:59.630 189391 DEBUG nova.network.neutron [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 26 23:40:59 compute-0 podman[203621]: time="2025-11-26T23:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:40:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 26 23:40:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4340 "" "Go-http-client/1.1"
Nov 26 23:40:59 compute-0 nova_compute[189387]: 2025-11-26 23:40:59.861 189391 DEBUG nova.network.neutron [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 26 23:41:00 compute-0 nova_compute[189387]: 2025-11-26 23:41:00.168 189391 DEBUG nova.compute.manager [req-c8aed8f4-91db-4b58-85f4-32c0ad8f74db req-db8b2931-823f-4e97-b7ca-3dcf058e42d1 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Received event network-changed-c92ee6b2-3f41-4732-97c1-c31d830eb511 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:41:00 compute-0 nova_compute[189387]: 2025-11-26 23:41:00.169 189391 DEBUG nova.compute.manager [req-c8aed8f4-91db-4b58-85f4-32c0ad8f74db req-db8b2931-823f-4e97-b7ca-3dcf058e42d1 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Refreshing instance network info cache due to event network-changed-c92ee6b2-3f41-4732-97c1-c31d830eb511. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 23:41:00 compute-0 nova_compute[189387]: 2025-11-26 23:41:00.169 189391 DEBUG oslo_concurrency.lockutils [req-c8aed8f4-91db-4b58-85f4-32c0ad8f74db req-db8b2931-823f-4e97-b7ca-3dcf058e42d1 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "refresh_cache-8feca651-47c9-4aa9-b922-3552759e013f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:41:00 compute-0 nova_compute[189387]: 2025-11-26 23:41:00.451 189391 DEBUG nova.network.neutron [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Successfully updated port: b2fce3d4-667e-40f1-8fad-b23b6e4286db _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 26 23:41:00 compute-0 nova_compute[189387]: 2025-11-26 23:41:00.468 189391 DEBUG oslo_concurrency.lockutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Acquiring lock "refresh_cache-696e6032-d12c-4533-ae7c-c510dc917f0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:41:00 compute-0 nova_compute[189387]: 2025-11-26 23:41:00.469 189391 DEBUG oslo_concurrency.lockutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Acquired lock "refresh_cache-696e6032-d12c-4533-ae7c-c510dc917f0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:41:00 compute-0 nova_compute[189387]: 2025-11-26 23:41:00.470 189391 DEBUG nova.network.neutron [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 26 23:41:00 compute-0 nova_compute[189387]: 2025-11-26 23:41:00.476 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:00 compute-0 nova_compute[189387]: 2025-11-26 23:41:00.826 189391 DEBUG nova.network.neutron [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 26 23:41:00 compute-0 podman[249951]: 2025-11-26 23:41:00.835418768 +0000 UTC m=+0.114437377 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:41:00 compute-0 nova_compute[189387]: 2025-11-26 23:41:00.861 189391 DEBUG nova.compute.manager [req-5b7ce230-9adb-48c5-9911-40702036ad36 req-40d0944a-85bc-429b-9ccf-d73f4e184829 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Received event network-changed-b2fce3d4-667e-40f1-8fad-b23b6e4286db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:41:00 compute-0 nova_compute[189387]: 2025-11-26 23:41:00.862 189391 DEBUG nova.compute.manager [req-5b7ce230-9adb-48c5-9911-40702036ad36 req-40d0944a-85bc-429b-9ccf-d73f4e184829 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Refreshing instance network info cache due to event network-changed-b2fce3d4-667e-40f1-8fad-b23b6e4286db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 23:41:00 compute-0 nova_compute[189387]: 2025-11-26 23:41:00.863 189391 DEBUG oslo_concurrency.lockutils [req-5b7ce230-9adb-48c5-9911-40702036ad36 req-40d0944a-85bc-429b-9ccf-d73f4e184829 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "refresh_cache-696e6032-d12c-4533-ae7c-c510dc917f0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:41:01 compute-0 openstack_network_exporter[205787]: ERROR   23:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:41:01 compute-0 openstack_network_exporter[205787]: ERROR   23:41:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:41:01 compute-0 openstack_network_exporter[205787]: ERROR   23:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:41:01 compute-0 openstack_network_exporter[205787]: ERROR   23:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:41:01 compute-0 openstack_network_exporter[205787]: ERROR   23:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.523 189391 DEBUG nova.network.neutron [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Updating instance_info_cache with network_info: [{"id": "c92ee6b2-3f41-4732-97c1-c31d830eb511", "address": "fa:16:3e:cb:44:18", "network": {"id": "d179492f-9081-4ade-9309-d46e956ca91d", "bridge": "br-int", "label": "tempest-ServersTestJSON-1354841299-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bac30b9fde54025a33de2b34a9c54e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc92ee6b2-3f", "ovs_interfaceid": "c92ee6b2-3f41-4732-97c1-c31d830eb511", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.565 189391 DEBUG oslo_concurrency.lockutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Releasing lock "refresh_cache-8feca651-47c9-4aa9-b922-3552759e013f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.566 189391 DEBUG nova.compute.manager [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Instance network_info: |[{"id": "c92ee6b2-3f41-4732-97c1-c31d830eb511", "address": "fa:16:3e:cb:44:18", "network": {"id": "d179492f-9081-4ade-9309-d46e956ca91d", "bridge": "br-int", "label": "tempest-ServersTestJSON-1354841299-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bac30b9fde54025a33de2b34a9c54e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc92ee6b2-3f", "ovs_interfaceid": "c92ee6b2-3f41-4732-97c1-c31d830eb511", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.567 189391 DEBUG oslo_concurrency.lockutils [req-c8aed8f4-91db-4b58-85f4-32c0ad8f74db req-db8b2931-823f-4e97-b7ca-3dcf058e42d1 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquired lock "refresh_cache-8feca651-47c9-4aa9-b922-3552759e013f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.568 189391 DEBUG nova.network.neutron [req-c8aed8f4-91db-4b58-85f4-32c0ad8f74db req-db8b2931-823f-4e97-b7ca-3dcf058e42d1 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Refreshing network info cache for port c92ee6b2-3f41-4732-97c1-c31d830eb511 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.573 189391 DEBUG nova.virt.libvirt.driver [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Start _get_guest_xml network_info=[{"id": "c92ee6b2-3f41-4732-97c1-c31d830eb511", "address": "fa:16:3e:cb:44:18", "network": {"id": "d179492f-9081-4ade-9309-d46e956ca91d", "bridge": "br-int", "label": "tempest-ServersTestJSON-1354841299-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bac30b9fde54025a33de2b34a9c54e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc92ee6b2-3f", "ovs_interfaceid": "c92ee6b2-3f41-4732-97c1-c31d830eb511", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T23:40:04Z,direct_url=<?>,disk_format='qcow2',id=948c6d5b-0d46-4aec-8649-b6cdcb1a5694,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd2e793599b6418881c391df7f71e0c6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T23:40:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': '948c6d5b-0d46-4aec-8649-b6cdcb1a5694'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.593 189391 WARNING nova.virt.libvirt.driver [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.605 189391 DEBUG nova.virt.libvirt.host [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.607 189391 DEBUG nova.virt.libvirt.host [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.613 189391 DEBUG nova.virt.libvirt.host [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.614 189391 DEBUG nova.virt.libvirt.host [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.615 189391 DEBUG nova.virt.libvirt.driver [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.615 189391 DEBUG nova.virt.hardware [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T23:40:03Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a4234b2d-ed51-4e17-ad57-a8fb6154451b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T23:40:04Z,direct_url=<?>,disk_format='qcow2',id=948c6d5b-0d46-4aec-8649-b6cdcb1a5694,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd2e793599b6418881c391df7f71e0c6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T23:40:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.616 189391 DEBUG nova.virt.hardware [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.617 189391 DEBUG nova.virt.hardware [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.617 189391 DEBUG nova.virt.hardware [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.618 189391 DEBUG nova.virt.hardware [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.618 189391 DEBUG nova.virt.hardware [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.619 189391 DEBUG nova.virt.hardware [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.619 189391 DEBUG nova.virt.hardware [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.620 189391 DEBUG nova.virt.hardware [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.620 189391 DEBUG nova.virt.hardware [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.621 189391 DEBUG nova.virt.hardware [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.629 189391 DEBUG nova.virt.libvirt.vif [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T23:40:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1059026894',display_name='tempest-ServersTestJSON-server-1059026894',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1059026894',id=7,image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN6LDqS8NtXm7rO4cbfyRl0UwiXLVFy0BSNz5YHzCqgflhmlM1k6vMMlj08m2Lp4F/cmW40Xe3lUBo4GRQ/HDFQ2UOdK/42Fb4E6AO4M+rQSanHKxB2/n7D0EFuZOyVf7w==',key_name='tempest-keypair-2144774203',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4bac30b9fde54025a33de2b34a9c54e4',ramdisk_id='',reservation_id='r-thervqxb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1608376607',owner_user_name='tempest-ServersTestJSON-1608376607-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T23:40:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2ffd5a94272f4e6faf977bacb6cd544a',uuid=8feca651-47c9-4aa9-b922-3552759e013f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c92ee6b2-3f41-4732-97c1-c31d830eb511", "address": "fa:16:3e:cb:44:18", "network": {"id": "d179492f-9081-4ade-9309-d46e956ca91d", "bridge": "br-int", "label": "tempest-ServersTestJSON-1354841299-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bac30b9fde54025a33de2b34a9c54e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc92ee6b2-3f", "ovs_interfaceid": "c92ee6b2-3f41-4732-97c1-c31d830eb511", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.630 189391 DEBUG nova.network.os_vif_util [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Converting VIF {"id": "c92ee6b2-3f41-4732-97c1-c31d830eb511", "address": "fa:16:3e:cb:44:18", "network": {"id": "d179492f-9081-4ade-9309-d46e956ca91d", "bridge": "br-int", "label": "tempest-ServersTestJSON-1354841299-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bac30b9fde54025a33de2b34a9c54e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc92ee6b2-3f", "ovs_interfaceid": "c92ee6b2-3f41-4732-97c1-c31d830eb511", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.632 189391 DEBUG nova.network.os_vif_util [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:44:18,bridge_name='br-int',has_traffic_filtering=True,id=c92ee6b2-3f41-4732-97c1-c31d830eb511,network=Network(d179492f-9081-4ade-9309-d46e956ca91d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc92ee6b2-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.633 189391 DEBUG nova.objects.instance [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8feca651-47c9-4aa9-b922-3552759e013f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.648 189391 DEBUG nova.virt.libvirt.driver [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] End _get_guest_xml xml=<domain type="kvm">
Nov 26 23:41:01 compute-0 nova_compute[189387]:  <uuid>8feca651-47c9-4aa9-b922-3552759e013f</uuid>
Nov 26 23:41:01 compute-0 nova_compute[189387]:  <name>instance-00000007</name>
Nov 26 23:41:01 compute-0 nova_compute[189387]:  <memory>131072</memory>
Nov 26 23:41:01 compute-0 nova_compute[189387]:  <vcpu>1</vcpu>
Nov 26 23:41:01 compute-0 nova_compute[189387]:  <metadata>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 23:41:01 compute-0 nova_compute[189387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:      <nova:name>tempest-ServersTestJSON-server-1059026894</nova:name>
Nov 26 23:41:01 compute-0 nova_compute[189387]:      <nova:creationTime>2025-11-26 23:41:01</nova:creationTime>
Nov 26 23:41:01 compute-0 nova_compute[189387]:      <nova:flavor name="m1.nano">
Nov 26 23:41:01 compute-0 nova_compute[189387]:        <nova:memory>128</nova:memory>
Nov 26 23:41:01 compute-0 nova_compute[189387]:        <nova:disk>1</nova:disk>
Nov 26 23:41:01 compute-0 nova_compute[189387]:        <nova:swap>0</nova:swap>
Nov 26 23:41:01 compute-0 nova_compute[189387]:        <nova:ephemeral>0</nova:ephemeral>
Nov 26 23:41:01 compute-0 nova_compute[189387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 23:41:01 compute-0 nova_compute[189387]:      </nova:flavor>
Nov 26 23:41:01 compute-0 nova_compute[189387]:      <nova:owner>
Nov 26 23:41:01 compute-0 nova_compute[189387]:        <nova:user uuid="2ffd5a94272f4e6faf977bacb6cd544a">tempest-ServersTestJSON-1608376607-project-member</nova:user>
Nov 26 23:41:01 compute-0 nova_compute[189387]:        <nova:project uuid="4bac30b9fde54025a33de2b34a9c54e4">tempest-ServersTestJSON-1608376607</nova:project>
Nov 26 23:41:01 compute-0 nova_compute[189387]:      </nova:owner>
Nov 26 23:41:01 compute-0 nova_compute[189387]:      <nova:root type="image" uuid="948c6d5b-0d46-4aec-8649-b6cdcb1a5694"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:      <nova:ports>
Nov 26 23:41:01 compute-0 nova_compute[189387]:        <nova:port uuid="c92ee6b2-3f41-4732-97c1-c31d830eb511">
Nov 26 23:41:01 compute-0 nova_compute[189387]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:        </nova:port>
Nov 26 23:41:01 compute-0 nova_compute[189387]:      </nova:ports>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    </nova:instance>
Nov 26 23:41:01 compute-0 nova_compute[189387]:  </metadata>
Nov 26 23:41:01 compute-0 nova_compute[189387]:  <sysinfo type="smbios">
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <system>
Nov 26 23:41:01 compute-0 nova_compute[189387]:      <entry name="manufacturer">RDO</entry>
Nov 26 23:41:01 compute-0 nova_compute[189387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 23:41:01 compute-0 nova_compute[189387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 23:41:01 compute-0 nova_compute[189387]:      <entry name="serial">8feca651-47c9-4aa9-b922-3552759e013f</entry>
Nov 26 23:41:01 compute-0 nova_compute[189387]:      <entry name="uuid">8feca651-47c9-4aa9-b922-3552759e013f</entry>
Nov 26 23:41:01 compute-0 nova_compute[189387]:      <entry name="family">Virtual Machine</entry>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    </system>
Nov 26 23:41:01 compute-0 nova_compute[189387]:  </sysinfo>
Nov 26 23:41:01 compute-0 nova_compute[189387]:  <os>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <boot dev="hd"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <smbios mode="sysinfo"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:  </os>
Nov 26 23:41:01 compute-0 nova_compute[189387]:  <features>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <acpi/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <apic/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <vmcoreinfo/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:  </features>
Nov 26 23:41:01 compute-0 nova_compute[189387]:  <clock offset="utc">
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <timer name="hpet" present="no"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:  </clock>
Nov 26 23:41:01 compute-0 nova_compute[189387]:  <cpu mode="host-model" match="exact">
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:  </cpu>
Nov 26 23:41:01 compute-0 nova_compute[189387]:  <devices>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <disk type="file" device="disk">
Nov 26 23:41:01 compute-0 nova_compute[189387]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/8feca651-47c9-4aa9-b922-3552759e013f/disk"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:      <target dev="vda" bus="virtio"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <disk type="file" device="cdrom">
Nov 26 23:41:01 compute-0 nova_compute[189387]:      <driver name="qemu" type="raw" cache="none"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/8feca651-47c9-4aa9-b922-3552759e013f/disk.config"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:      <target dev="sda" bus="sata"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <interface type="ethernet">
Nov 26 23:41:01 compute-0 nova_compute[189387]:      <mac address="fa:16:3e:cb:44:18"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:      <mtu size="1442"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:      <target dev="tapc92ee6b2-3f"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    </interface>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <serial type="pty">
Nov 26 23:41:01 compute-0 nova_compute[189387]:      <log file="/var/lib/nova/instances/8feca651-47c9-4aa9-b922-3552759e013f/console.log" append="off"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    </serial>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <video>
Nov 26 23:41:01 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    </video>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <input type="tablet" bus="usb"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <rng model="virtio">
Nov 26 23:41:01 compute-0 nova_compute[189387]:      <backend model="random">/dev/urandom</backend>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    </rng>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <controller type="usb" index="0"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    <memballoon model="virtio">
Nov 26 23:41:01 compute-0 nova_compute[189387]:      <stats period="10"/>
Nov 26 23:41:01 compute-0 nova_compute[189387]:    </memballoon>
Nov 26 23:41:01 compute-0 nova_compute[189387]:  </devices>
Nov 26 23:41:01 compute-0 nova_compute[189387]: </domain>
Nov 26 23:41:01 compute-0 nova_compute[189387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.649 189391 DEBUG nova.compute.manager [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Preparing to wait for external event network-vif-plugged-c92ee6b2-3f41-4732-97c1-c31d830eb511 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.650 189391 DEBUG oslo_concurrency.lockutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Acquiring lock "8feca651-47c9-4aa9-b922-3552759e013f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.651 189391 DEBUG oslo_concurrency.lockutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Lock "8feca651-47c9-4aa9-b922-3552759e013f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.652 189391 DEBUG oslo_concurrency.lockutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Lock "8feca651-47c9-4aa9-b922-3552759e013f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.653 189391 DEBUG nova.virt.libvirt.vif [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T23:40:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1059026894',display_name='tempest-ServersTestJSON-server-1059026894',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1059026894',id=7,image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN6LDqS8NtXm7rO4cbfyRl0UwiXLVFy0BSNz5YHzCqgflhmlM1k6vMMlj08m2Lp4F/cmW40Xe3lUBo4GRQ/HDFQ2UOdK/42Fb4E6AO4M+rQSanHKxB2/n7D0EFuZOyVf7w==',key_name='tempest-keypair-2144774203',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4bac30b9fde54025a33de2b34a9c54e4',ramdisk_id='',reservation_id='r-thervqxb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1608376607',owner_user_name='tempest-ServersTestJSON-1608376607-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T23:40:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2ffd5a94272f4e6faf977bacb6cd544a',uuid=8feca651-47c9-4aa9-b922-3552759e013f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c92ee6b2-3f41-4732-97c1-c31d830eb511", "address": "fa:16:3e:cb:44:18", "network": {"id": "d179492f-9081-4ade-9309-d46e956ca91d", "bridge": "br-int", "label": "tempest-ServersTestJSON-1354841299-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bac30b9fde54025a33de2b34a9c54e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc92ee6b2-3f", "ovs_interfaceid": "c92ee6b2-3f41-4732-97c1-c31d830eb511", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.653 189391 DEBUG nova.network.os_vif_util [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Converting VIF {"id": "c92ee6b2-3f41-4732-97c1-c31d830eb511", "address": "fa:16:3e:cb:44:18", "network": {"id": "d179492f-9081-4ade-9309-d46e956ca91d", "bridge": "br-int", "label": "tempest-ServersTestJSON-1354841299-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bac30b9fde54025a33de2b34a9c54e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc92ee6b2-3f", "ovs_interfaceid": "c92ee6b2-3f41-4732-97c1-c31d830eb511", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.654 189391 DEBUG nova.network.os_vif_util [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:44:18,bridge_name='br-int',has_traffic_filtering=True,id=c92ee6b2-3f41-4732-97c1-c31d830eb511,network=Network(d179492f-9081-4ade-9309-d46e956ca91d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc92ee6b2-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.655 189391 DEBUG os_vif [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:44:18,bridge_name='br-int',has_traffic_filtering=True,id=c92ee6b2-3f41-4732-97c1-c31d830eb511,network=Network(d179492f-9081-4ade-9309-d46e956ca91d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc92ee6b2-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.656 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.657 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.657 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.661 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.662 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc92ee6b2-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.662 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc92ee6b2-3f, col_values=(('external_ids', {'iface-id': 'c92ee6b2-3f41-4732-97c1-c31d830eb511', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cb:44:18', 'vm-uuid': '8feca651-47c9-4aa9-b922-3552759e013f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.664 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:01 compute-0 NetworkManager[56227]: <info>  [1764200461.6657] manager: (tapc92ee6b2-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.666 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.676 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.678 189391 INFO os_vif [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:44:18,bridge_name='br-int',has_traffic_filtering=True,id=c92ee6b2-3f41-4732-97c1-c31d830eb511,network=Network(d179492f-9081-4ade-9309-d46e956ca91d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc92ee6b2-3f')#033[00m
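
Under the hood the plug is just an OVSDB transaction, visible above as AddBridgeCommand, AddPortCommand and DbSetCommand. A sketch of the same transaction issued directly with ovsdbapp, assuming the standard unix:/run/openvswitch/db.sock endpoint:

    # Sketch: replay the logged OVSDB transaction with ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    port = 'tapc92ee6b2-3f'
    with api.transaction(check_error=True) as txn:
        # AddBridgeCommand(name=br-int, may_exist=True, datapath_type=system)
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        # AddPortCommand(bridge=br-int, port=tapc92ee6b2-3f, may_exist=True)
        txn.add(api.add_port('br-int', port, may_exist=True))
        # DbSetCommand: the external_ids that OVN matches against
        # (iface-id is the Neutron port UUID; ovn-controller claims the
        # logical port from it, as the ovn_controller lines further down show)
        txn.add(api.db_set('Interface', port, ('external_ids', {
            'iface-id': 'c92ee6b2-3f41-4732-97c1-c31d830eb511',
            'iface-status': 'active',
            'attached-mac': 'fa:16:3e:cb:44:18',
            'vm-uuid': '8feca651-47c9-4aa9-b922-3552759e013f'})))

The AddBridgeCommand reports "Transaction caused no change" in the log because br-int already exists and the command was issued with may_exist=True.
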
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.738 189391 DEBUG nova.virt.libvirt.driver [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.738 189391 DEBUG nova.virt.libvirt.driver [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.739 189391 DEBUG nova.virt.libvirt.driver [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] No VIF found with MAC fa:16:3e:cb:44:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 26 23:41:01 compute-0 nova_compute[189387]: 2025-11-26 23:41:01.739 189391 INFO nova.virt.libvirt.driver [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Using config drive#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.279 189391 INFO nova.virt.libvirt.driver [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Creating config drive at /var/lib/nova/instances/8feca651-47c9-4aa9-b922-3552759e013f/disk.config#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.291 189391 DEBUG oslo_concurrency.processutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8feca651-47c9-4aa9-b922-3552759e013f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp015976fo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
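
The config drive being created here is a plain ISO 9660 image (with Joliet and Rock Ridge extensions) built from a temporary staging tree. A sketch that reproduces the exact mkisofs invocation from the log line above; the staging path and output path are the ones logged:

    # Rebuild a config-drive ISO the way the logged mkisofs call does.
    # The staging directory holds the metadata tree Nova writes into a
    # tmpdir (here /tmp/tmp015976fo) before packing it.
    import subprocess

    def make_config_drive(staging: str, out_path: str) -> None:
        subprocess.run(
            ['/usr/bin/mkisofs', '-o', out_path,
             '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
             '-publisher',
             'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
             '-quiet', '-J', '-r', '-V', 'config-2', staging],
            check=True)

    make_config_drive(
        '/tmp/tmp015976fo',
        '/var/lib/nova/instances/8feca651-47c9-4aa9-b922-3552759e013f/disk.config')

The -V config-2 volume label matters: it is what cloud-init and CirrOS probe for when looking for a config drive at boot.
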
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.331 189391 DEBUG nova.network.neutron [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Updating instance_info_cache with network_info: [{"id": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "address": "fa:16:3e:94:50:8a", "network": {"id": "23864f37-12d9-4f3e-a0da-ef91c19406ac", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1986799011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cda1d63c3f9d4791a18030ebba1c1b11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2fce3d4-66", "ovs_interfaceid": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.354 189391 DEBUG oslo_concurrency.lockutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Releasing lock "refresh_cache-696e6032-d12c-4533-ae7c-c510dc917f0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.355 189391 DEBUG nova.compute.manager [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Instance network_info: |[{"id": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "address": "fa:16:3e:94:50:8a", "network": {"id": "23864f37-12d9-4f3e-a0da-ef91c19406ac", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1986799011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cda1d63c3f9d4791a18030ebba1c1b11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2fce3d4-66", "ovs_interfaceid": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.355 189391 DEBUG oslo_concurrency.lockutils [req-5b7ce230-9adb-48c5-9911-40702036ad36 req-40d0944a-85bc-429b-9ccf-d73f4e184829 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquired lock "refresh_cache-696e6032-d12c-4533-ae7c-c510dc917f0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.356 189391 DEBUG nova.network.neutron [req-5b7ce230-9adb-48c5-9911-40702036ad36 req-40d0944a-85bc-429b-9ccf-d73f4e184829 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Refreshing network info cache for port b2fce3d4-667e-40f1-8fad-b23b6e4286db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.358 189391 DEBUG nova.virt.libvirt.driver [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Start _get_guest_xml network_info=[{"id": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "address": "fa:16:3e:94:50:8a", "network": {"id": "23864f37-12d9-4f3e-a0da-ef91c19406ac", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1986799011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cda1d63c3f9d4791a18030ebba1c1b11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2fce3d4-66", "ovs_interfaceid": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T23:40:04Z,direct_url=<?>,disk_format='qcow2',id=948c6d5b-0d46-4aec-8649-b6cdcb1a5694,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd2e793599b6418881c391df7f71e0c6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T23:40:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': '948c6d5b-0d46-4aec-8649-b6cdcb1a5694'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.368 189391 WARNING nova.virt.libvirt.driver [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
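
This warning is informational: the socket/NUMA layout that libvirt reports for the host means the `socket` PCI NUMA affinity policy cannot be honoured. A quick way to inspect what the driver saw, using libvirt-python and the stdlib XML parser:

    # Compare the NUMA cell count against the CPU socket topology reported
    # in libvirt host capabilities; the warning above fires (roughly) when
    # sockets outnumber NUMA cells, as on this single-cell KVM guest host.
    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open('qemu:///system')
    caps = ET.fromstring(conn.getCapabilities())
    conn.close()

    cells = caps.findall('./host/topology/cells/cell')
    cpu_topo = caps.find('./host/cpu/topology')
    print('NUMA cells:', len(cells))
    print('host CPU topology:', cpu_topo.attrib)  # sockets/cores/threads
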
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.378 189391 DEBUG nova.virt.libvirt.host [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.379 189391 DEBUG nova.virt.libvirt.host [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.383 189391 DEBUG nova.virt.libvirt.host [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.384 189391 DEBUG nova.virt.libvirt.host [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.384 189391 DEBUG nova.virt.libvirt.driver [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.385 189391 DEBUG nova.virt.hardware [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T23:40:03Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a4234b2d-ed51-4e17-ad57-a8fb6154451b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T23:40:04Z,direct_url=<?>,disk_format='qcow2',id=948c6d5b-0d46-4aec-8649-b6cdcb1a5694,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd2e793599b6418881c391df7f71e0c6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T23:40:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.386 189391 DEBUG nova.virt.hardware [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.386 189391 DEBUG nova.virt.hardware [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.387 189391 DEBUG nova.virt.hardware [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.387 189391 DEBUG nova.virt.hardware [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.388 189391 DEBUG nova.virt.hardware [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.388 189391 DEBUG nova.virt.hardware [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.389 189391 DEBUG nova.virt.hardware [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.389 189391 DEBUG nova.virt.hardware [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.390 189391 DEBUG nova.virt.hardware [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.390 189391 DEBUG nova.virt.hardware [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
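
The hardware.py lines above walk a simple search: with no flavor or image constraints (limits and preferences all 0, maxima 65536), enumerate every (sockets, cores, threads) triple whose product equals the vCPU count, which for 1 vCPU leaves only 1:1:1. A simplified re-implementation of that enumeration (illustrative, not Nova's exact code):

    # Simplified version of the topology search in nova/virt/hardware.py:
    # list (sockets, cores, threads) triples with s*c*t == vcpus, capped
    # by per-dimension maxima (65536 in the log means "unconstrained").
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        topologies = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if vcpus % (s * c):
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    topologies.append((s, c, t))
        return topologies

    print(possible_topologies(1))   # [(1, 1, 1)], matching "Got 1 possible
                                    # topologies" in the log
    print(possible_topologies(4))   # 6 candidates for a 4-vCPU flavor
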
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.396 189391 DEBUG nova.virt.libvirt.vif [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T23:40:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-518237589',display_name='tempest-AttachInterfacesUnderV243Test-server-518237589',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-518237589',id=6,image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAeCacre+DpKbR9zR5rGfgdgg0OxLzmuU8CTtn4qnPlPeLMLpl9jSBZzyDL9JbVAxWJZsWYdBzTeeojuXVvs32m0Ze42+0Cdj57DGNt5DQ+xHdJMtxDqfVliNQonyhT4jw==',key_name='tempest-keypair-1706157709',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cda1d63c3f9d4791a18030ebba1c1b11',ramdisk_id='',reservation_id='r-6l92ar4i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1379565429',owner_user_name='tempest-AttachInterfacesUnderV243Test-1379565429-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T23:40:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='357477a3688848b099ed3f5f61c71771',uuid=696e6032-d12c-4533-ae7c-c510dc917f0a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "address": "fa:16:3e:94:50:8a", "network": {"id": "23864f37-12d9-4f3e-a0da-ef91c19406ac", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1986799011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cda1d63c3f9d4791a18030ebba1c1b11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2fce3d4-66", "ovs_interfaceid": 
"b2fce3d4-667e-40f1-8fad-b23b6e4286db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.396 189391 DEBUG nova.network.os_vif_util [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Converting VIF {"id": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "address": "fa:16:3e:94:50:8a", "network": {"id": "23864f37-12d9-4f3e-a0da-ef91c19406ac", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1986799011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cda1d63c3f9d4791a18030ebba1c1b11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2fce3d4-66", "ovs_interfaceid": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.397 189391 DEBUG nova.network.os_vif_util [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:50:8a,bridge_name='br-int',has_traffic_filtering=True,id=b2fce3d4-667e-40f1-8fad-b23b6e4286db,network=Network(23864f37-12d9-4f3e-a0da-ef91c19406ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2fce3d4-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.399 189391 DEBUG nova.objects.instance [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Lazy-loading 'pci_devices' on Instance uuid 696e6032-d12c-4533-ae7c-c510dc917f0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.422 189391 DEBUG nova.virt.libvirt.driver [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] End _get_guest_xml xml=<domain type="kvm">
Nov 26 23:41:02 compute-0 nova_compute[189387]:  <uuid>696e6032-d12c-4533-ae7c-c510dc917f0a</uuid>
Nov 26 23:41:02 compute-0 nova_compute[189387]:  <name>instance-00000006</name>
Nov 26 23:41:02 compute-0 nova_compute[189387]:  <memory>131072</memory>
Nov 26 23:41:02 compute-0 nova_compute[189387]:  <vcpu>1</vcpu>
Nov 26 23:41:02 compute-0 nova_compute[189387]:  <metadata>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 23:41:02 compute-0 nova_compute[189387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:      <nova:name>tempest-AttachInterfacesUnderV243Test-server-518237589</nova:name>
Nov 26 23:41:02 compute-0 nova_compute[189387]:      <nova:creationTime>2025-11-26 23:41:02</nova:creationTime>
Nov 26 23:41:02 compute-0 nova_compute[189387]:      <nova:flavor name="m1.nano">
Nov 26 23:41:02 compute-0 nova_compute[189387]:        <nova:memory>128</nova:memory>
Nov 26 23:41:02 compute-0 nova_compute[189387]:        <nova:disk>1</nova:disk>
Nov 26 23:41:02 compute-0 nova_compute[189387]:        <nova:swap>0</nova:swap>
Nov 26 23:41:02 compute-0 nova_compute[189387]:        <nova:ephemeral>0</nova:ephemeral>
Nov 26 23:41:02 compute-0 nova_compute[189387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 23:41:02 compute-0 nova_compute[189387]:      </nova:flavor>
Nov 26 23:41:02 compute-0 nova_compute[189387]:      <nova:owner>
Nov 26 23:41:02 compute-0 nova_compute[189387]:        <nova:user uuid="357477a3688848b099ed3f5f61c71771">tempest-AttachInterfacesUnderV243Test-1379565429-project-member</nova:user>
Nov 26 23:41:02 compute-0 nova_compute[189387]:        <nova:project uuid="cda1d63c3f9d4791a18030ebba1c1b11">tempest-AttachInterfacesUnderV243Test-1379565429</nova:project>
Nov 26 23:41:02 compute-0 nova_compute[189387]:      </nova:owner>
Nov 26 23:41:02 compute-0 nova_compute[189387]:      <nova:root type="image" uuid="948c6d5b-0d46-4aec-8649-b6cdcb1a5694"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:      <nova:ports>
Nov 26 23:41:02 compute-0 nova_compute[189387]:        <nova:port uuid="b2fce3d4-667e-40f1-8fad-b23b6e4286db">
Nov 26 23:41:02 compute-0 nova_compute[189387]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:        </nova:port>
Nov 26 23:41:02 compute-0 nova_compute[189387]:      </nova:ports>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    </nova:instance>
Nov 26 23:41:02 compute-0 nova_compute[189387]:  </metadata>
Nov 26 23:41:02 compute-0 nova_compute[189387]:  <sysinfo type="smbios">
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <system>
Nov 26 23:41:02 compute-0 nova_compute[189387]:      <entry name="manufacturer">RDO</entry>
Nov 26 23:41:02 compute-0 nova_compute[189387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 23:41:02 compute-0 nova_compute[189387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 23:41:02 compute-0 nova_compute[189387]:      <entry name="serial">696e6032-d12c-4533-ae7c-c510dc917f0a</entry>
Nov 26 23:41:02 compute-0 nova_compute[189387]:      <entry name="uuid">696e6032-d12c-4533-ae7c-c510dc917f0a</entry>
Nov 26 23:41:02 compute-0 nova_compute[189387]:      <entry name="family">Virtual Machine</entry>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    </system>
Nov 26 23:41:02 compute-0 nova_compute[189387]:  </sysinfo>
Nov 26 23:41:02 compute-0 nova_compute[189387]:  <os>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <boot dev="hd"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <smbios mode="sysinfo"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:  </os>
Nov 26 23:41:02 compute-0 nova_compute[189387]:  <features>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <acpi/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <apic/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <vmcoreinfo/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:  </features>
Nov 26 23:41:02 compute-0 nova_compute[189387]:  <clock offset="utc">
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <timer name="hpet" present="no"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:  </clock>
Nov 26 23:41:02 compute-0 nova_compute[189387]:  <cpu mode="host-model" match="exact">
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:  </cpu>
Nov 26 23:41:02 compute-0 nova_compute[189387]:  <devices>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <disk type="file" device="disk">
Nov 26 23:41:02 compute-0 nova_compute[189387]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/696e6032-d12c-4533-ae7c-c510dc917f0a/disk"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:      <target dev="vda" bus="virtio"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <disk type="file" device="cdrom">
Nov 26 23:41:02 compute-0 nova_compute[189387]:      <driver name="qemu" type="raw" cache="none"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/696e6032-d12c-4533-ae7c-c510dc917f0a/disk.config"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:      <target dev="sda" bus="sata"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <interface type="ethernet">
Nov 26 23:41:02 compute-0 nova_compute[189387]:      <mac address="fa:16:3e:94:50:8a"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:      <mtu size="1442"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:      <target dev="tapb2fce3d4-66"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    </interface>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <serial type="pty">
Nov 26 23:41:02 compute-0 nova_compute[189387]:      <log file="/var/lib/nova/instances/696e6032-d12c-4533-ae7c-c510dc917f0a/console.log" append="off"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    </serial>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <video>
Nov 26 23:41:02 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    </video>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <input type="tablet" bus="usb"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <rng model="virtio">
Nov 26 23:41:02 compute-0 nova_compute[189387]:      <backend model="random">/dev/urandom</backend>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    </rng>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <controller type="usb" index="0"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    <memballoon model="virtio">
Nov 26 23:41:02 compute-0 nova_compute[189387]:      <stats period="10"/>
Nov 26 23:41:02 compute-0 nova_compute[189387]:    </memballoon>
Nov 26 23:41:02 compute-0 nova_compute[189387]:  </devices>
Nov 26 23:41:02 compute-0 nova_compute[189387]: </domain>
Nov 26 23:41:02 compute-0 nova_compute[189387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
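
Once _get_guest_xml has rendered a domain document like the one above, the driver defines and launches it through libvirt. A minimal sketch of that step with libvirt-python, assuming the XML has been saved to a local domain.xml:

    # Define and start a domain from XML like the dump above.
    import libvirt

    with open('domain.xml') as f:
        xml = f.read()

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)   # persistent definition
        dom.create()                # boot it; systemd-machined then
                                    # registers a qemu-<id>-<name> machine
        print(dom.name(), dom.ID())
    finally:
        conn.close()
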
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.423 189391 DEBUG nova.compute.manager [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Preparing to wait for external event network-vif-plugged-b2fce3d4-667e-40f1-8fad-b23b6e4286db prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.424 189391 DEBUG oslo_concurrency.lockutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Acquiring lock "696e6032-d12c-4533-ae7c-c510dc917f0a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.425 189391 DEBUG oslo_concurrency.lockutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Lock "696e6032-d12c-4533-ae7c-c510dc917f0a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.426 189391 DEBUG oslo_concurrency.lockutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Lock "696e6032-d12c-4533-ae7c-c510dc917f0a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
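
Note the ordering above: the waiter for network-vif-plugged-b2fce3d4-... is registered before the guest is started, so Neutron's callback cannot race past the listener. An illustrative sketch of that prepare-then-wait pattern (Nova uses eventlet primitives; plain threading shown here):

    # Illustration of the prepare-before-trigger event pattern logged above:
    # register the waiter first, perform the action that eventually fires
    # the event, then block with a timeout.
    import threading

    class InstanceEvents:
        def __init__(self):
            self._events = {}
            self._lock = threading.Lock()

        def prepare(self, name):
            with self._lock:              # "Acquiring lock ...-events"
                return self._events.setdefault(name, threading.Event())

        def deliver(self, name):
            with self._lock:
                ev = self._events.pop(name, None)
            if ev:
                ev.set()

    events = InstanceEvents()
    name = 'network-vif-plugged-b2fce3d4-667e-40f1-8fad-b23b6e4286db'
    waiter = events.prepare(name)
    # ... plug the VIF / start the domain here ...
    # Neutron's external-event REST call would end up in deliver():
    events.deliver(name)
    waiter.wait(timeout=300)  # Nova's vif_plugging_timeout defaults to 300s
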
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.427 189391 DEBUG nova.virt.libvirt.vif [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T23:40:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-518237589',display_name='tempest-AttachInterfacesUnderV243Test-server-518237589',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-518237589',id=6,image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAeCacre+DpKbR9zR5rGfgdgg0OxLzmuU8CTtn4qnPlPeLMLpl9jSBZzyDL9JbVAxWJZsWYdBzTeeojuXVvs32m0Ze42+0Cdj57DGNt5DQ+xHdJMtxDqfVliNQonyhT4jw==',key_name='tempest-keypair-1706157709',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cda1d63c3f9d4791a18030ebba1c1b11',ramdisk_id='',reservation_id='r-6l92ar4i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1379565429',owner_user_name='tempest-AttachInterfacesUnderV243Test-1379565429-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T23:40:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='357477a3688848b099ed3f5f61c71771',uuid=696e6032-d12c-4533-ae7c-c510dc917f0a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "address": "fa:16:3e:94:50:8a", "network": {"id": "23864f37-12d9-4f3e-a0da-ef91c19406ac", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1986799011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cda1d63c3f9d4791a18030ebba1c1b11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2fce3d4-66", 
"ovs_interfaceid": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.428 189391 DEBUG nova.network.os_vif_util [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Converting VIF {"id": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "address": "fa:16:3e:94:50:8a", "network": {"id": "23864f37-12d9-4f3e-a0da-ef91c19406ac", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1986799011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cda1d63c3f9d4791a18030ebba1c1b11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2fce3d4-66", "ovs_interfaceid": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.429 189391 DEBUG nova.network.os_vif_util [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:94:50:8a,bridge_name='br-int',has_traffic_filtering=True,id=b2fce3d4-667e-40f1-8fad-b23b6e4286db,network=Network(23864f37-12d9-4f3e-a0da-ef91c19406ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2fce3d4-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.431 189391 DEBUG os_vif [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:50:8a,bridge_name='br-int',has_traffic_filtering=True,id=b2fce3d4-667e-40f1-8fad-b23b6e4286db,network=Network(23864f37-12d9-4f3e-a0da-ef91c19406ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2fce3d4-66') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.433 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.434 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.435 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.438 189391 DEBUG oslo_concurrency.processutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8feca651-47c9-4aa9-b922-3552759e013f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp015976fo" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
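
mkisofs exiting 0 means the first instance's config drive is complete. A quick sanity check is to read back the primary volume descriptor, for example with isoinfo:

    # Verify the finished config drive: the volume id must be 'config-2'
    # for cloud-init/CirrOS to recognize it.
    import subprocess

    iso = ('/var/lib/nova/instances/'
           '8feca651-47c9-4aa9-b922-3552759e013f/disk.config')
    pvd = subprocess.run(['isoinfo', '-d', '-i', iso],
                         check=True, capture_output=True, text=True).stdout
    assert 'Volume id: config-2' in pvd, pvd
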
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.449 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.450 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb2fce3d4-66, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.451 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb2fce3d4-66, col_values=(('external_ids', {'iface-id': 'b2fce3d4-667e-40f1-8fad-b23b6e4286db', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:94:50:8a', 'vm-uuid': '696e6032-d12c-4533-ae7c-c510dc917f0a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.456 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:02 compute-0 NetworkManager[56227]: <info>  [1764200462.4574] manager: (tapb2fce3d4-66): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.461 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.470 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.473 189391 INFO os_vif [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:94:50:8a,bridge_name='br-int',has_traffic_filtering=True,id=b2fce3d4-667e-40f1-8fad-b23b6e4286db,network=Network(23864f37-12d9-4f3e-a0da-ef91c19406ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2fce3d4-66')#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.532 189391 DEBUG nova.virt.libvirt.driver [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:41:02 compute-0 kernel: tapc92ee6b2-3f: entered promiscuous mode
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.533 189391 DEBUG nova.virt.libvirt.driver [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.534 189391 DEBUG nova.virt.libvirt.driver [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] No VIF found with MAC fa:16:3e:94:50:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.535 189391 INFO nova.virt.libvirt.driver [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Using config drive#033[00m
Nov 26 23:41:02 compute-0 NetworkManager[56227]: <info>  [1764200462.5399] manager: (tapc92ee6b2-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/35)
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.542 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:02 compute-0 ovn_controller[97697]: 2025-11-26T23:41:02Z|00066|binding|INFO|Claiming lport c92ee6b2-3f41-4732-97c1-c31d830eb511 for this chassis.
Nov 26 23:41:02 compute-0 ovn_controller[97697]: 2025-11-26T23:41:02Z|00067|binding|INFO|c92ee6b2-3f41-4732-97c1-c31d830eb511: Claiming fa:16:3e:cb:44:18 10.100.0.10
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:02.555 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:44:18 10.100.0.10'], port_security=['fa:16:3e:cb:44:18 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '8feca651-47c9-4aa9-b922-3552759e013f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d179492f-9081-4ade-9309-d46e956ca91d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4bac30b9fde54025a33de2b34a9c54e4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '40a87626-8aed-4a1a-a337-d9fa8b4ebf44', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e040bb05-9e06-4ab7-9cca-57e26ef22943, chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=c92ee6b2-3f41-4732-97c1-c31d830eb511) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:02.556 106595 INFO neutron.agent.ovn.metadata.agent [-] Port c92ee6b2-3f41-4732-97c1-c31d830eb511 in datapath d179492f-9081-4ade-9309-d46e956ca91d bound to our chassis#033[00m
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:02.558 106595 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d179492f-9081-4ade-9309-d46e956ca91d#033[00m
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:02.571 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[1d9ae1ed-ac49-4195-989f-2e1087fb8f64]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:02.571 106595 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd179492f-91 in ovnmeta-d179492f-9081-4ade-9309-d46e956ca91d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
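
Provisioning here means building a per-network namespace (ovnmeta-<network-uuid>) holding one end of a veth pair, so the metadata proxy can answer 169.254.169.254 inside the tenant network. The iproute2 equivalent of what the privsep daemon is doing in these lines (illustrative; the agent drives this through privsep and pyroute2):

    # iproute2 equivalent of the namespace/veth provisioning logged above.
    import subprocess

    def sh(*cmd):
        subprocess.run(cmd, check=True)

    ns = 'ovnmeta-d179492f-9081-4ade-9309-d46e956ca91d'
    outer, inner = 'tapd179492f-90', 'tapd179492f-91'

    sh('ip', 'netns', 'add', ns)
    sh('ip', 'link', 'add', outer, 'type', 'veth', 'peer', 'name', inner)
    sh('ip', 'link', 'set', inner, 'netns', ns)   # inner end into the ns
    sh('ip', 'link', 'set', outer, 'up')
    sh('ip', '-n', ns, 'link', 'set', inner, 'up')
    # the outer end is then added to br-int so OVN can steer metadata traffic
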
Nov 26 23:41:02 compute-0 ovn_controller[97697]: 2025-11-26T23:41:02Z|00068|binding|INFO|Setting lport c92ee6b2-3f41-4732-97c1-c31d830eb511 ovn-installed in OVS
Nov 26 23:41:02 compute-0 ovn_controller[97697]: 2025-11-26T23:41:02Z|00069|binding|INFO|Setting lport c92ee6b2-3f41-4732-97c1-c31d830eb511 up in Southbound
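
The claim sequence ends with the logical port marked up in the Southbound database, which is what ultimately lets Neutron emit the network-vif-plugged event Nova registered for earlier. The binding state can be checked from the Port_Binding table, assuming ovn-sbctl can reach the SB DB:

    # Inspect the Port_Binding row that ovn-controller just updated.
    import subprocess

    lport = 'c92ee6b2-3f41-4732-97c1-c31d830eb511'
    out = subprocess.run(
        ['ovn-sbctl', '--columns=chassis,up,mac', 'find', 'Port_Binding',
         f'logical_port={lport}'],
        check=True, capture_output=True, text=True).stdout
    print(out)   # chassis set and up=[true] once the claim above completes
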
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:02.573 239757 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd179492f-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:02.573 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[5cfb501c-6636-4741-aa72-6727614866fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:02.574 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[b0768c61-46cb-447f-b6be-eecb779d4686]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.579 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:02.590 106708 DEBUG oslo.privsep.daemon [-] privsep: reply[c53447c6-c757-49dd-b73c-d63638b030ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:02 compute-0 systemd-machined[155674]: New machine qemu-6-instance-00000007.
Nov 26 23:41:02 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000007.
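
libvirt registers each running guest with systemd-machined, which is where the machine name comes from: qemu driver, libvirt domain ID 6, domain name instance-00000007. The registered machine can be inspected with machinectl:

    # Look up the machine that systemd-machined just registered.
    import subprocess

    out = subprocess.run(
        ['machinectl', 'status', 'qemu-6-instance-00000007'],
        capture_output=True, text=True, check=True).stdout
    print(out)   # shows the machine's leader PID, unit scope, and since-when
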
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:02.614 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[99ba3d9e-3f2a-4845-862c-727bc0ceb186]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:02 compute-0 systemd-udevd[250000]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 23:41:02 compute-0 NetworkManager[56227]: <info>  [1764200462.6447] device (tapc92ee6b2-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 23:41:02 compute-0 NetworkManager[56227]: <info>  [1764200462.6460] device (tapc92ee6b2-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:02.654 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[e7e8f5d0-20bb-4f6a-8cac-3440adad2200]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:02.660 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[bfbd80b3-3c2c-4f4d-9041-a3524a37a023]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:02 compute-0 NetworkManager[56227]: <info>  [1764200462.6616] manager: (tapd179492f-90): new Veth device (/org/freedesktop/NetworkManager/Devices/36)
Nov 26 23:41:02 compute-0 systemd-udevd[250003]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:02.692 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[c1df1dde-ccc0-494a-b562-30e551a96a18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:02.696 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[a9c72e5f-4409-4bda-ae84-50c854b35bfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:02 compute-0 NetworkManager[56227]: <info>  [1764200462.7264] device (tapd179492f-90): carrier: link connected
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:02.735 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[e885d7ff-c8a1-4c4b-b37d-7dd30d7e571f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:02.754 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[4f501778-692e-4797-8d75-cda91ddb0542]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd179492f-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:3f:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514538, 'reachable_time': 44785, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250032, 'error': None, 'target': 'ovnmeta-d179492f-9081-4ade-9309-d46e956ca91d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
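[editor's note] The RTM_NEWLINK payload above is a pyroute2-style netlink message: a dict whose 'attrs' member is a list of [name, value] pairs. Fields are pulled out of such a dump roughly like this (msg stands for the dict printed in the log, trimmed here to three attributes):

    def get_attr(msg, name):
        # return the first matching attribute from a pyroute2-style
        # 'attrs' list, or None if the attribute is absent
        for key, value in msg["attrs"]:
            if key == name:
                return value

    msg = {"attrs": [["IFLA_IFNAME", "tapd179492f-91"],
                     ["IFLA_ADDRESS", "fa:16:3e:db:3f:61"],
                     ["IFLA_MTU", 1500]]}   # trimmed from the dump above
    assert get_attr(msg, "IFLA_IFNAME") == "tapd179492f-91"
    # msg["header"]["target"] in the full dump names the namespace the
    # query ran in: 'ovnmeta-d179492f-9081-4ade-9309-d46e956ca91d'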
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:02.771 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[445b5d58-872b-4c92-bb8b-663e2902d85e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedb:3f61'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514538, 'tstamp': 514538}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250033, 'error': None, 'target': 'ovnmeta-d179492f-9081-4ade-9309-d46e956ca91d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:02.798 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[3424a7db-0e2e-4baf-b9f1-17ca2af6f9e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd179492f-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:3f:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514538, 'reachable_time': 44785, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250034, 'error': None, 'target': 'ovnmeta-d179492f-9081-4ade-9309-d46e956ca91d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:02.840 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[ecde73b0-bcd3-40e9-a826-18b850fd584f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:02.918 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[7cf10c7f-b321-4b34-9500-2e141aa32eb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:02.920 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd179492f-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:02.921 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:02.921 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd179492f-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:41:02 compute-0 kernel: tapd179492f-90: entered promiscuous mode
Nov 26 23:41:02 compute-0 NetworkManager[56227]: <info>  [1764200462.9252] manager: (tapd179492f-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.924 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.935 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:02.936 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd179492f-90, col_values=(('external_ids', {'iface-id': '8597c58a-43c6-47b4-9e30-006eea2c4907'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
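[editor's note] The three ovsdbapp commands above (DelPortCommand on br-ex, AddPortCommand on br-int, DbSetCommand on the Interface's external_ids) correspond to the Open_vSwitch schema API; setting external_ids:iface-id is what lets ovn-controller match the OVS interface to the OVN logical port, which is exactly the lport named in the "Releasing lport 8597c58a-..." line just below. A sketch of the equivalent calls, assuming a local ovsdb-server socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # socket path is an assumption; deployments differ
    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port("tapd179492f-90", bridge="br-ex", if_exists=True))
        txn.add(api.add_port("br-int", "tapd179492f-90", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tapd179492f-90",
            ("external_ids",
             {"iface-id": "8597c58a-43c6-47b4-9e30-006eea2c4907"})))

The "Transaction caused no change" DEBUG in between simply means the br-ex cleanup found nothing to delete.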
Nov 26 23:41:02 compute-0 ovn_controller[97697]: 2025-11-26T23:41:02Z|00070|binding|INFO|Releasing lport 8597c58a-43c6-47b4-9e30-006eea2c4907 from this chassis (sb_readonly=0)
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.938 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.968 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:02 compute-0 nova_compute[189387]: 2025-11-26 23:41:02.979 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:02.980 106595 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d179492f-9081-4ade-9309-d46e956ca91d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d179492f-9081-4ade-9309-d46e956ca91d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
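[editor's note] The "Unable to access ... .pid.haproxy" DEBUG above is the expected first-provision path: the pid file does not exist yet, so there is no running proxy to signal and a fresh haproxy is spawned. The helper's observable behaviour, as a hedged reconstruction rather than Neutron's actual code:

    def get_value_from_file(path):
        # missing or unreadable file -> None, logged like the Errno 2
        # DEBUG above (Neutron's real helper also converts the value)
        try:
            with open(path) as f:
                return f.read().strip()
        except OSError as e:
            print(f"Unable to access {path}; Error: {e}")
            return None

    pid = get_value_from_file(
        "/var/lib/neutron/external/pids/"
        "d179492f-9081-4ade-9309-d46e956ca91d.pid.haproxy")
    # pid is None here, so the agent renders the haproxy config dumped
    # below and starts a new proxy instead of reloading an existing one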
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:02.981 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[b3be9895-fedb-40bf-a7d2-d22f8db99359]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:02.983 106595 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: global
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]:    log         /dev/log local0 debug
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]:    log-tag     haproxy-metadata-proxy-d179492f-9081-4ade-9309-d46e956ca91d
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]:    user        root
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]:    group       root
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]:    maxconn     1024
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]:    pidfile     /var/lib/neutron/external/pids/d179492f-9081-4ade-9309-d46e956ca91d.pid.haproxy
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]:    daemon
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: defaults
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]:    log global
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]:    mode http
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]:    option httplog
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]:    option dontlognull
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]:    option http-server-close
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]:    option forwardfor
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]:    retries                 3
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]:    timeout http-request    30s
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]:    timeout connect         30s
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]:    timeout client          32s
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]:    timeout server          32s
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]:    timeout http-keep-alive 30s
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: listen listener
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]:    bind 169.254.169.254:80
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]:    server metadata /var/lib/neutron/metadata_proxy
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]:    http-request add-header X-OVN-Network-ID d179492f-9081-4ade-9309-d46e956ca91d
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
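[editor's note] The rendered config binds 169.254.169.254:80 inside the namespace and forwards every request to the UNIX socket /var/lib/neutron/metadata_proxy (an haproxy server address starting with "/" is a UNIX socket), stamping it with X-OVN-Network-ID and, via "option forwardfor", X-Forwarded-For. What reaches the metadata agent is therefore roughly the following; the request path and instance IP are illustrative:

    import socket

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/var/lib/neutron/metadata_proxy")
    s.sendall(
        b"GET /openstack/latest/meta_data.json HTTP/1.1\r\n"
        b"Host: 169.254.169.254\r\n"
        b"X-OVN-Network-ID: d179492f-9081-4ade-9309-d46e956ca91d\r\n"
        b"X-Forwarded-For: 10.100.0.10\r\n"
        b"Connection: close\r\n\r\n")
    print(s.recv(65536).decode(errors="replace"))

The agent combines the network ID with the forwarded source address to resolve which Neutron port, and hence which instance, is asking.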
Nov 26 23:41:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:02.984 106595 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d179492f-9081-4ade-9309-d46e956ca91d', 'env', 'PROCESS_TAG=haproxy-d179492f-9081-4ade-9309-d46e956ca91d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d179492f-9081-4ade-9309-d46e956ca91d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 26 23:41:03 compute-0 nova_compute[189387]: 2025-11-26 23:41:03.028 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200463.027174, 8feca651-47c9-4aa9-b922-3552759e013f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:41:03 compute-0 nova_compute[189387]: 2025-11-26 23:41:03.028 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] VM Started (Lifecycle Event)#033[00m
Nov 26 23:41:03 compute-0 nova_compute[189387]: 2025-11-26 23:41:03.047 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:41:03 compute-0 nova_compute[189387]: 2025-11-26 23:41:03.053 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200463.027336, 8feca651-47c9-4aa9-b922-3552759e013f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:41:03 compute-0 nova_compute[189387]: 2025-11-26 23:41:03.053 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] VM Paused (Lifecycle Event)#033[00m
Nov 26 23:41:03 compute-0 nova_compute[189387]: 2025-11-26 23:41:03.075 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:41:03 compute-0 nova_compute[189387]: 2025-11-26 23:41:03.083 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 23:41:03 compute-0 nova_compute[189387]: 2025-11-26 23:41:03.101 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
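[editor's note] The Started/Paused churn above is normal mid-spawn: libvirt creates the domain paused, so the hypervisor reports power_state 3 (PAUSED) while the database still holds 0 (NOSTATE). Nova declines to reconcile while a task owns the instance, which is what "pending task (spawning). Skip." records. A condensed, illustrative version of that decision, not nova's actual method:

    # power-state constants as printed in the log line above
    NOSTATE, PAUSED = 0, 3

    def sync_power_state(db_power_state, vm_power_state, task_state):
        if task_state is not None:
            return "skip"       # a task owns the instance; don't fight it
        if db_power_state != vm_power_state:
            return "update-db"  # otherwise trust what the hypervisor says
        return "in-sync"

    assert sync_power_state(NOSTATE, PAUSED, "spawning") == "skip"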
Nov 26 23:41:03 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 26 23:41:03 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 26 23:41:03 compute-0 nova_compute[189387]: 2025-11-26 23:41:03.475 189391 INFO nova.virt.libvirt.driver [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Creating config drive at /var/lib/nova/instances/696e6032-d12c-4533-ae7c-c510dc917f0a/disk.config#033[00m
Nov 26 23:41:03 compute-0 nova_compute[189387]: 2025-11-26 23:41:03.484 189391 DEBUG oslo_concurrency.processutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/696e6032-d12c-4533-ae7c-c510dc917f0a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpupdqnyr0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:41:03 compute-0 podman[250097]: 2025-11-26 23:41:03.497872791 +0000 UTC m=+0.098227956 container create 156afb05434a419ecfcf6c82637abab3328ae2e7fcf3650fd52c51a21defa811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d179492f-9081-4ade-9309-d46e956ca91d, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:41:03 compute-0 podman[250097]: 2025-11-26 23:41:03.445904138 +0000 UTC m=+0.046259363 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 26 23:41:03 compute-0 systemd[1]: Started libpod-conmon-156afb05434a419ecfcf6c82637abab3328ae2e7fcf3650fd52c51a21defa811.scope.
Nov 26 23:41:03 compute-0 systemd[1]: Started libcrun container.
Nov 26 23:41:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0cc5cc526ff42d848e7089ed8f70d878bb8841289de5ff916d6144c9537c279/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 26 23:41:03 compute-0 nova_compute[189387]: 2025-11-26 23:41:03.631 189391 DEBUG oslo_concurrency.processutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/696e6032-d12c-4533-ae7c-c510dc917f0a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpupdqnyr0" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
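[editor's note] The config drive is a plain ISO 9660 image with volume label config-2, built from a staging directory of metadata files and typically attached to the guest as a CD-ROM device. The exact invocation is in the log; reproduced from Python it is just a subprocess call:

    import subprocess

    inst = "/var/lib/nova/instances/696e6032-d12c-4533-ae7c-c510dc917f0a"
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", f"{inst}/disk.config",
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2",
         "/tmp/tmpupdqnyr0"],  # nova's temporary staging dir from the log
        check=True)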
Nov 26 23:41:03 compute-0 podman[250097]: 2025-11-26 23:41:03.648183133 +0000 UTC m=+0.248538278 container init 156afb05434a419ecfcf6c82637abab3328ae2e7fcf3650fd52c51a21defa811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d179492f-9081-4ade-9309-d46e956ca91d, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:41:03 compute-0 podman[250097]: 2025-11-26 23:41:03.659545655 +0000 UTC m=+0.259900780 container start 156afb05434a419ecfcf6c82637abab3328ae2e7fcf3650fd52c51a21defa811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d179492f-9081-4ade-9309-d46e956ca91d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 26 23:41:03 compute-0 neutron-haproxy-ovnmeta-d179492f-9081-4ade-9309-d46e956ca91d[250114]: [NOTICE]   (250120) : New worker (250130) forked
Nov 26 23:41:03 compute-0 neutron-haproxy-ovnmeta-d179492f-9081-4ade-9309-d46e956ca91d[250114]: [NOTICE]   (250120) : Loading success.
Nov 26 23:41:03 compute-0 nova_compute[189387]: 2025-11-26 23:41:03.720 189391 DEBUG nova.network.neutron [req-c8aed8f4-91db-4b58-85f4-32c0ad8f74db req-db8b2931-823f-4e97-b7ca-3dcf058e42d1 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Updated VIF entry in instance network info cache for port c92ee6b2-3f41-4732-97c1-c31d830eb511. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 23:41:03 compute-0 nova_compute[189387]: 2025-11-26 23:41:03.720 189391 DEBUG nova.network.neutron [req-c8aed8f4-91db-4b58-85f4-32c0ad8f74db req-db8b2931-823f-4e97-b7ca-3dcf058e42d1 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Updating instance_info_cache with network_info: [{"id": "c92ee6b2-3f41-4732-97c1-c31d830eb511", "address": "fa:16:3e:cb:44:18", "network": {"id": "d179492f-9081-4ade-9309-d46e956ca91d", "bridge": "br-int", "label": "tempest-ServersTestJSON-1354841299-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bac30b9fde54025a33de2b34a9c54e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc92ee6b2-3f", "ovs_interfaceid": "c92ee6b2-3f41-4732-97c1-c31d830eb511", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:41:03 compute-0 kernel: tapb2fce3d4-66: entered promiscuous mode
Nov 26 23:41:03 compute-0 systemd-udevd[250021]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 23:41:03 compute-0 NetworkManager[56227]: <info>  [1764200463.7377] manager: (tapb2fce3d4-66): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Nov 26 23:41:03 compute-0 ovn_controller[97697]: 2025-11-26T23:41:03Z|00071|binding|INFO|Claiming lport b2fce3d4-667e-40f1-8fad-b23b6e4286db for this chassis.
Nov 26 23:41:03 compute-0 ovn_controller[97697]: 2025-11-26T23:41:03Z|00072|binding|INFO|b2fce3d4-667e-40f1-8fad-b23b6e4286db: Claiming fa:16:3e:94:50:8a 10.100.0.10
Nov 26 23:41:03 compute-0 nova_compute[189387]: 2025-11-26 23:41:03.742 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:03.746 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:50:8a 10.100.0.10'], port_security=['fa:16:3e:94:50:8a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '696e6032-d12c-4533-ae7c-c510dc917f0a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-23864f37-12d9-4f3e-a0da-ef91c19406ac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cda1d63c3f9d4791a18030ebba1c1b11', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2674a8ce-e68b-41b7-9c29-4c54411c5b16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7e8838df-2918-44ef-8ded-da51293ac711, chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=b2fce3d4-667e-40f1-8fad-b23b6e4286db) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:41:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:03.747 106595 INFO neutron.agent.ovn.metadata.agent [-] Port b2fce3d4-667e-40f1-8fad-b23b6e4286db in datapath 23864f37-12d9-4f3e-a0da-ef91c19406ac bound to our chassis#033[00m
Nov 26 23:41:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:03.750 106595 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 23864f37-12d9-4f3e-a0da-ef91c19406ac#033[00m
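[editor's note] The "Matched UPDATE: PortBindingUpdatedEvent(...)" line above is ovsdbapp's event machinery at work: the agent registers row events against the southbound Port_Binding table and provisions a network when one of its ports becomes bound to this chassis. The shape of such an event class, sketched against ovsdbapp's RowEvent (the agent-side hooks are hypothetical):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, agent):
            # fire on 'update' events for Port_Binding rows, matching the
            # "Matched UPDATE" DEBUG above
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)
            self.agent = agent

        def run(self, event, row, old):
            # react only when the port just became bound to this chassis;
            # row.chassis is a list of Chassis row references in the dump
            if row.chassis and row.chassis[0].name == self.agent.chassis:
                self.agent.provision_network(row)  # hypothetical hook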
Nov 26 23:41:03 compute-0 NetworkManager[56227]: <info>  [1764200463.7528] device (tapb2fce3d4-66): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 23:41:03 compute-0 nova_compute[189387]: 2025-11-26 23:41:03.754 189391 DEBUG oslo_concurrency.lockutils [req-c8aed8f4-91db-4b58-85f4-32c0ad8f74db req-db8b2931-823f-4e97-b7ca-3dcf058e42d1 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Releasing lock "refresh_cache-8feca651-47c9-4aa9-b922-3552759e013f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:41:03 compute-0 ovn_controller[97697]: 2025-11-26T23:41:03Z|00073|binding|INFO|Setting lport b2fce3d4-667e-40f1-8fad-b23b6e4286db ovn-installed in OVS
Nov 26 23:41:03 compute-0 ovn_controller[97697]: 2025-11-26T23:41:03Z|00074|binding|INFO|Setting lport b2fce3d4-667e-40f1-8fad-b23b6e4286db up in Southbound
Nov 26 23:41:03 compute-0 NetworkManager[56227]: <info>  [1764200463.7632] device (tapb2fce3d4-66): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 23:41:03 compute-0 nova_compute[189387]: 2025-11-26 23:41:03.762 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:03.762 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[d93a8c7f-650a-44ac-ad17-366c414b2f21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:03.764 106595 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap23864f37-11 in ovnmeta-23864f37-12d9-4f3e-a0da-ef91c19406ac namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 26 23:41:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:03.766 239757 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap23864f37-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 26 23:41:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:03.766 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[c593e02f-4e92-4b73-946c-29d1195cf311]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:03 compute-0 nova_compute[189387]: 2025-11-26 23:41:03.767 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:03.768 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[86886b1e-f7b9-46d8-bf10-f2e1c28fe62e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:03.782 106708 DEBUG oslo.privsep.daemon [-] privsep: reply[5c5a70de-a95f-4188-b12d-bce5705d5df3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:03 compute-0 systemd-machined[155674]: New machine qemu-7-instance-00000006.
Nov 26 23:41:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:03.813 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[978bd17a-c581-47ff-809b-bbcd15fae450]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:03 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000006.
Nov 26 23:41:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:03.853 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[40e16a41-e5fb-459a-99af-223dd9e860a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:03.861 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[28aeb139-efc3-4bdc-98aa-05d442672903]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:03 compute-0 NetworkManager[56227]: <info>  [1764200463.8650] manager: (tap23864f37-10): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Nov 26 23:41:03 compute-0 nova_compute[189387]: 2025-11-26 23:41:03.886 189391 DEBUG nova.network.neutron [req-5b7ce230-9adb-48c5-9911-40702036ad36 req-40d0944a-85bc-429b-9ccf-d73f4e184829 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Updated VIF entry in instance network info cache for port b2fce3d4-667e-40f1-8fad-b23b6e4286db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 23:41:03 compute-0 nova_compute[189387]: 2025-11-26 23:41:03.886 189391 DEBUG nova.network.neutron [req-5b7ce230-9adb-48c5-9911-40702036ad36 req-40d0944a-85bc-429b-9ccf-d73f4e184829 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Updating instance_info_cache with network_info: [{"id": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "address": "fa:16:3e:94:50:8a", "network": {"id": "23864f37-12d9-4f3e-a0da-ef91c19406ac", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1986799011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cda1d63c3f9d4791a18030ebba1c1b11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2fce3d4-66", "ovs_interfaceid": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:41:03 compute-0 nova_compute[189387]: 2025-11-26 23:41:03.899 189391 DEBUG oslo_concurrency.lockutils [req-5b7ce230-9adb-48c5-9911-40702036ad36 req-40d0944a-85bc-429b-9ccf-d73f4e184829 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Releasing lock "refresh_cache-696e6032-d12c-4533-ae7c-c510dc917f0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:41:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:03.910 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[84dd2af4-ebee-41b9-86e3-4f80d69a30cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:03.915 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[2db231ab-4be6-4dd2-b774-14fdc9a77380]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:03 compute-0 NetworkManager[56227]: <info>  [1764200463.9474] device (tap23864f37-10): carrier: link connected
Nov 26 23:41:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:03.958 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[342bd6af-4d94-4113-8023-53c1e97c356e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:03.984 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[99ed13cd-2bda-4b8b-b2c2-e812727fc5fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap23864f37-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:ce:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514660, 'reachable_time': 18540, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250161, 'error': None, 'target': 'ovnmeta-23864f37-12d9-4f3e-a0da-ef91c19406ac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:04.008 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[6d6478a6-88e0-47cf-aed5-28125099df2c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe15:ce7d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 514660, 'tstamp': 514660}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250162, 'error': None, 'target': 'ovnmeta-23864f37-12d9-4f3e-a0da-ef91c19406ac', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:04.029 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[bc9fa3f5-7469-4355-b126-189427ba4b1c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap23864f37-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:15:ce:7d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514660, 'reachable_time': 18540, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250163, 'error': None, 'target': 'ovnmeta-23864f37-12d9-4f3e-a0da-ef91c19406ac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:04.077 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[5ef52a57-6165-46b8-aecb-2e5eeab7cebd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:04.171 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[ee8d15a3-95f2-40e2-9490-463ceac85728]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:04.173 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap23864f37-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:04.174 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:04.175 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap23864f37-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:41:04 compute-0 NetworkManager[56227]: <info>  [1764200464.1782] manager: (tap23864f37-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.177 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.181 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:04 compute-0 kernel: tap23864f37-10: entered promiscuous mode
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:04.185 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap23864f37-10, col_values=(('external_ids', {'iface-id': '779990b0-f58d-4df2-b9a7-48b5134f6ea9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.187 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:04 compute-0 ovn_controller[97697]: 2025-11-26T23:41:04Z|00075|binding|INFO|Releasing lport 779990b0-f58d-4df2-b9a7-48b5134f6ea9 from this chassis (sb_readonly=0)
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.190 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:04.190 106595 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/23864f37-12d9-4f3e-a0da-ef91c19406ac.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/23864f37-12d9-4f3e-a0da-ef91c19406ac.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:04.191 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[c78d122a-8c01-4c8a-9e44-e03295fc0bd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:04.191 106595 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]: global
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]:    log         /dev/log local0 debug
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]:    log-tag     haproxy-metadata-proxy-23864f37-12d9-4f3e-a0da-ef91c19406ac
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]:    user        root
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]:    group       root
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]:    maxconn     1024
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]:    pidfile     /var/lib/neutron/external/pids/23864f37-12d9-4f3e-a0da-ef91c19406ac.pid.haproxy
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]:    daemon
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]: defaults
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]:    log global
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]:    mode http
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]:    option httplog
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]:    option dontlognull
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]:    option http-server-close
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]:    option forwardfor
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]:    retries                 3
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]:    timeout http-request    30s
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]:    timeout connect         30s
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]:    timeout client          32s
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]:    timeout server          32s
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]:    timeout http-keep-alive 30s
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]: listen listener
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]:    bind 169.254.169.254:80
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]:    server metadata /var/lib/neutron/metadata_proxy
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]:    http-request add-header X-OVN-Network-ID 23864f37-12d9-4f3e-a0da-ef91c19406ac
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 26 23:41:04 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:04.192 106595 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-23864f37-12d9-4f3e-a0da-ef91c19406ac', 'env', 'PROCESS_TAG=haproxy-23864f37-12d9-4f3e-a0da-ef91c19406ac', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/23864f37-12d9-4f3e-a0da-ef91c19406ac.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
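Stripped of the sudo/neutron-rootwrap privilege wrapper, the command above amounts to running haproxy inside the network's ovnmeta- namespace. A rough equivalent (sketch, assumes root):

    import subprocess

    network_id = "23864f37-12d9-4f3e-a0da-ef91c19406ac"  # from the log line above
    subprocess.check_call([
        "ip", "netns", "exec", f"ovnmeta-{network_id}",
        "env", f"PROCESS_TAG=haproxy-{network_id}",
        "haproxy", "-f", f"/var/lib/neutron/ovn-metadata-proxy/{network_id}.conf",
    ])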
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.214 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.267 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200464.2665138, 696e6032-d12c-4533-ae7c-c510dc917f0a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.268 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] VM Started (Lifecycle Event)
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.302 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.311 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200464.2667544, 696e6032-d12c-4533-ae7c-c510dc917f0a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.311 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] VM Paused (Lifecycle Event)
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.340 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.348 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.376 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] During sync_power_state the instance has a pending task (spawning). Skip.
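The Paused sync above shows nova's guard in action: the DB still says power_state 0 (NOSTATE), libvirt reports 3 (PAUSED), but because task_state is 'spawning' the manager only logs and skips. A simplified paraphrase of that decision (not nova's actual code; the numeric values match nova.compute.power_state):

    NOSTATE, RUNNING, PAUSED = 0, 1, 3

    def sync_power_state(db_power_state, vm_power_state, task_state):
        # While a task such as 'spawning' owns the instance, lifecycle
        # events are informational only; acting on them could race the task.
        if task_state is not None:
            return "skip"
        if db_power_state != vm_power_state:
            return "update-db"
        return "in-sync"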
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.565 189391 DEBUG nova.compute.manager [req-94a7f30d-ab4e-43d5-b35c-8d1a8c6f77c0 req-4548ed81-df7a-4bbf-9b6a-db046b533072 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Received event network-vif-plugged-c92ee6b2-3f41-4732-97c1-c31d830eb511 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.566 189391 DEBUG oslo_concurrency.lockutils [req-94a7f30d-ab4e-43d5-b35c-8d1a8c6f77c0 req-4548ed81-df7a-4bbf-9b6a-db046b533072 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "8feca651-47c9-4aa9-b922-3552759e013f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.567 189391 DEBUG oslo_concurrency.lockutils [req-94a7f30d-ab4e-43d5-b35c-8d1a8c6f77c0 req-4548ed81-df7a-4bbf-9b6a-db046b533072 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "8feca651-47c9-4aa9-b922-3552759e013f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.567 189391 DEBUG oslo_concurrency.lockutils [req-94a7f30d-ab4e-43d5-b35c-8d1a8c6f77c0 req-4548ed81-df7a-4bbf-9b6a-db046b533072 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "8feca651-47c9-4aa9-b922-3552759e013f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.568 189391 DEBUG nova.compute.manager [req-94a7f30d-ab4e-43d5-b35c-8d1a8c6f77c0 req-4548ed81-df7a-4bbf-9b6a-db046b533072 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Processing event network-vif-plugged-c92ee6b2-3f41-4732-97c1-c31d830eb511 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.569 189391 DEBUG nova.compute.manager [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.575 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200464.5748324, 8feca651-47c9-4aa9-b922-3552759e013f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.576 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] VM Resumed (Lifecycle Event)
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.580 189391 DEBUG nova.virt.libvirt.driver [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.588 189391 INFO nova.virt.libvirt.driver [-] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Instance spawned successfully.
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.589 189391 DEBUG nova.virt.libvirt.driver [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.611 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.624 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.633 189391 DEBUG nova.virt.libvirt.driver [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.634 189391 DEBUG nova.virt.libvirt.driver [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.635 189391 DEBUG nova.virt.libvirt.driver [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.635 189391 DEBUG nova.virt.libvirt.driver [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.636 189391 DEBUG nova.virt.libvirt.driver [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.637 189391 DEBUG nova.virt.libvirt.driver [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.649 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] During sync_power_state the instance has a pending task (spawning). Skip.
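The six "Found default for ..." entries record the libvirt driver pinning bus and model defaults into the instance's metadata, so that later hard reboots or rebuilds keep the same virtual hardware even if the global defaults change. In spirit (sketch):

    # Defaults observed in the log above for this guest.
    OBSERVED_DEFAULTS = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }

    def register_undefined_details(image_props):
        # Keep anything the image set explicitly; fill in the rest.
        return {k: image_props.get(k, v) for k, v in OBSERVED_DEFAULTS.items()}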
Nov 26 23:41:04 compute-0 podman[250200]: 2025-11-26 23:41:04.688216562 +0000 UTC m=+0.070424165 container create 2646903a21f877f4e80958734c700d43a4e5eb56fe3fbcc2d2ff1b81fffabed2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23864f37-12d9-4f3e-a0da-ef91c19406ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.696 189391 INFO nova.compute.manager [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Took 7.40 seconds to spawn the instance on the hypervisor.
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.697 189391 DEBUG nova.compute.manager [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 23:41:04 compute-0 systemd[1]: Started libpod-conmon-2646903a21f877f4e80958734c700d43a4e5eb56fe3fbcc2d2ff1b81fffabed2.scope.
Nov 26 23:41:04 compute-0 podman[250200]: 2025-11-26 23:41:04.656316833 +0000 UTC m=+0.038524496 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 26 23:41:04 compute-0 systemd[1]: Started libcrun container.
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.767 189391 INFO nova.compute.manager [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Took 7.87 seconds to build instance.
Nov 26 23:41:04 compute-0 nova_compute[189387]: 2025-11-26 23:41:04.781 189391 DEBUG oslo_concurrency.lockutils [None req-005229ac-22c9-402f-8f6c-9234a9c5d709 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Lock "8feca651-47c9-4aa9-b922-3552759e013f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:41:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/974ab5427ddd94e7ee1765db7fec224f9831ead7105318c16f469b33895d6b48/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 26 23:41:04 compute-0 podman[250200]: 2025-11-26 23:41:04.803760589 +0000 UTC m=+0.185968282 container init 2646903a21f877f4e80958734c700d43a4e5eb56fe3fbcc2d2ff1b81fffabed2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23864f37-12d9-4f3e-a0da-ef91c19406ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:41:04 compute-0 podman[250200]: 2025-11-26 23:41:04.810687372 +0000 UTC m=+0.192895005 container start 2646903a21f877f4e80958734c700d43a4e5eb56fe3fbcc2d2ff1b81fffabed2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23864f37-12d9-4f3e-a0da-ef91c19406ac, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 23:41:04 compute-0 neutron-haproxy-ovnmeta-23864f37-12d9-4f3e-a0da-ef91c19406ac[250214]: [NOTICE]   (250218) : New worker (250220) forked
Nov 26 23:41:04 compute-0 neutron-haproxy-ovnmeta-23864f37-12d9-4f3e-a0da-ef91c19406ac[250214]: [NOTICE]   (250218) : Loading success.
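With the worker forked and "Loading success." reported, the proxy is answering on 169.254.169.254:80 inside the ovnmeta- namespace. A hypothetical smoke test from the host (sketch, assumes root and curl available):

    import subprocess

    network_id = "23864f37-12d9-4f3e-a0da-ef91c19406ac"
    # Any HTTP status here proves the bind inside the namespace works;
    # a full metadata answer additionally needs the backing agent socket.
    subprocess.run([
        "ip", "netns", "exec", f"ovnmeta-{network_id}",
        "curl", "-s", "-o", "/dev/null", "-w", "%{http_code}\n",
        "http://169.254.169.254/",
    ], check=False)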
Nov 26 23:41:05 compute-0 nova_compute[189387]: 2025-11-26 23:41:05.479 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.765 189391 DEBUG nova.compute.manager [req-6fd0788c-6d13-4b62-a6f7-e414b462e294 req-eb641e62-3fbf-4cb6-8ad9-84c838312127 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Received event network-vif-plugged-c92ee6b2-3f41-4732-97c1-c31d830eb511 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.767 189391 DEBUG oslo_concurrency.lockutils [req-6fd0788c-6d13-4b62-a6f7-e414b462e294 req-eb641e62-3fbf-4cb6-8ad9-84c838312127 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "8feca651-47c9-4aa9-b922-3552759e013f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.767 189391 DEBUG oslo_concurrency.lockutils [req-6fd0788c-6d13-4b62-a6f7-e414b462e294 req-eb641e62-3fbf-4cb6-8ad9-84c838312127 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "8feca651-47c9-4aa9-b922-3552759e013f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.767 189391 DEBUG oslo_concurrency.lockutils [req-6fd0788c-6d13-4b62-a6f7-e414b462e294 req-eb641e62-3fbf-4cb6-8ad9-84c838312127 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "8feca651-47c9-4aa9-b922-3552759e013f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.768 189391 DEBUG nova.compute.manager [req-6fd0788c-6d13-4b62-a6f7-e414b462e294 req-eb641e62-3fbf-4cb6-8ad9-84c838312127 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] No waiting events found dispatching network-vif-plugged-c92ee6b2-3f41-4732-97c1-c31d830eb511 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.768 189391 WARNING nova.compute.manager [req-6fd0788c-6d13-4b62-a6f7-e414b462e294 req-eb641e62-3fbf-4cb6-8ad9-84c838312127 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Received unexpected event network-vif-plugged-c92ee6b2-3f41-4732-97c1-c31d830eb511 for instance with vm_state active and task_state None.
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.769 189391 DEBUG nova.compute.manager [req-6fd0788c-6d13-4b62-a6f7-e414b462e294 req-eb641e62-3fbf-4cb6-8ad9-84c838312127 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Received event network-vif-plugged-b2fce3d4-667e-40f1-8fad-b23b6e4286db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.769 189391 DEBUG oslo_concurrency.lockutils [req-6fd0788c-6d13-4b62-a6f7-e414b462e294 req-eb641e62-3fbf-4cb6-8ad9-84c838312127 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "696e6032-d12c-4533-ae7c-c510dc917f0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.769 189391 DEBUG oslo_concurrency.lockutils [req-6fd0788c-6d13-4b62-a6f7-e414b462e294 req-eb641e62-3fbf-4cb6-8ad9-84c838312127 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "696e6032-d12c-4533-ae7c-c510dc917f0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.770 189391 DEBUG oslo_concurrency.lockutils [req-6fd0788c-6d13-4b62-a6f7-e414b462e294 req-eb641e62-3fbf-4cb6-8ad9-84c838312127 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "696e6032-d12c-4533-ae7c-c510dc917f0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.770 189391 DEBUG nova.compute.manager [req-6fd0788c-6d13-4b62-a6f7-e414b462e294 req-eb641e62-3fbf-4cb6-8ad9-84c838312127 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Processing event network-vif-plugged-b2fce3d4-667e-40f1-8fad-b23b6e4286db _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.771 189391 DEBUG nova.compute.manager [req-6fd0788c-6d13-4b62-a6f7-e414b462e294 req-eb641e62-3fbf-4cb6-8ad9-84c838312127 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Received event network-vif-plugged-b2fce3d4-667e-40f1-8fad-b23b6e4286db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.771 189391 DEBUG oslo_concurrency.lockutils [req-6fd0788c-6d13-4b62-a6f7-e414b462e294 req-eb641e62-3fbf-4cb6-8ad9-84c838312127 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "696e6032-d12c-4533-ae7c-c510dc917f0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.771 189391 DEBUG oslo_concurrency.lockutils [req-6fd0788c-6d13-4b62-a6f7-e414b462e294 req-eb641e62-3fbf-4cb6-8ad9-84c838312127 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "696e6032-d12c-4533-ae7c-c510dc917f0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.772 189391 DEBUG oslo_concurrency.lockutils [req-6fd0788c-6d13-4b62-a6f7-e414b462e294 req-eb641e62-3fbf-4cb6-8ad9-84c838312127 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "696e6032-d12c-4533-ae7c-c510dc917f0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.772 189391 DEBUG nova.compute.manager [req-6fd0788c-6d13-4b62-a6f7-e414b462e294 req-eb641e62-3fbf-4cb6-8ad9-84c838312127 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] No waiting events found dispatching network-vif-plugged-b2fce3d4-667e-40f1-8fad-b23b6e4286db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.772 189391 WARNING nova.compute.manager [req-6fd0788c-6d13-4b62-a6f7-e414b462e294 req-eb641e62-3fbf-4cb6-8ad9-84c838312127 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Received unexpected event network-vif-plugged-b2fce3d4-667e-40f1-8fad-b23b6e4286db for instance with vm_state building and task_state spawning.
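Both instances receive a second network-vif-plugged after their spawn paths already consumed the first, so pop_instance_event finds no registered waiter and the manager downgrades the delivery to the WARNING above instead of failing. A toy model of that pop-or-warn bookkeeping (threading stands in for nova's eventlet plumbing):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._waiters = {}  # (instance_uuid, event_tag) -> threading.Event
            self._lock = threading.Lock()

        def prepare(self, instance_uuid, tag):
            with self._lock:
                ev = threading.Event()
                self._waiters[(instance_uuid, tag)] = ev
                return ev

        def pop(self, instance_uuid, tag):
            # Mirrors the acquire/pop/release sequence in the lockutils
            # lines above; a duplicate event finds nothing to signal.
            with self._lock:
                return self._waiters.pop((instance_uuid, tag), None)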
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.773 189391 DEBUG nova.compute.manager [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.777 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200466.77761, 696e6032-d12c-4533-ae7c-c510dc917f0a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.778 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] VM Resumed (Lifecycle Event)
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.782 189391 DEBUG nova.virt.libvirt.driver [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.788 189391 INFO nova.virt.libvirt.driver [-] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Instance spawned successfully.
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.788 189391 DEBUG nova.virt.libvirt.driver [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.806 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.814 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.821 189391 DEBUG nova.virt.libvirt.driver [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.821 189391 DEBUG nova.virt.libvirt.driver [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.822 189391 DEBUG nova.virt.libvirt.driver [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.823 189391 DEBUG nova.virt.libvirt.driver [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.823 189391 DEBUG nova.virt.libvirt.driver [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.825 189391 DEBUG nova.virt.libvirt.driver [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.832 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.880 189391 INFO nova.compute.manager [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Took 11.62 seconds to spawn the instance on the hypervisor.
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.880 189391 DEBUG nova.compute.manager [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 23:41:06 compute-0 ovn_controller[97697]: 2025-11-26T23:41:06Z|00076|binding|INFO|Releasing lport 779990b0-f58d-4df2-b9a7-48b5134f6ea9 from this chassis (sb_readonly=0)
Nov 26 23:41:06 compute-0 ovn_controller[97697]: 2025-11-26T23:41:06Z|00077|binding|INFO|Releasing lport 8597c58a-43c6-47b4-9e30-006eea2c4907 from this chassis (sb_readonly=0)
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.946 189391 INFO nova.compute.manager [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Took 12.13 seconds to build instance.
Nov 26 23:41:06 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.975 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:41:07 compute-0 nova_compute[189387]: 2025-11-26 23:41:06.995 189391 DEBUG oslo_concurrency.lockutils [None req-9a7cc66c-6855-44f1-af35-5c33ba7ab977 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Lock "696e6032-d12c-4533-ae7c-c510dc917f0a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.278s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:41:07 compute-0 ovn_controller[97697]: 2025-11-26T23:41:07Z|00078|binding|INFO|Releasing lport 779990b0-f58d-4df2-b9a7-48b5134f6ea9 from this chassis (sb_readonly=0)
Nov 26 23:41:07 compute-0 ovn_controller[97697]: 2025-11-26T23:41:07Z|00079|binding|INFO|Releasing lport 8597c58a-43c6-47b4-9e30-006eea2c4907 from this chassis (sb_readonly=0)
Nov 26 23:41:07 compute-0 nova_compute[189387]: 2025-11-26 23:41:07.125 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:41:07 compute-0 nova_compute[189387]: 2025-11-26 23:41:07.457 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:41:08 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:08.496 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbd59242-3683-4df7-8a2a-12b2eb702783, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
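The transaction above bumps neutron:ovn-metadata-sb-cfg in Chassis_Private.external_ids, which is how the agent acknowledges the southbound configuration sequence number it has caught up to. With ovsdbapp, that kind of column update looks roughly like this (sketch; 'api' is an assumed, already-connected ovsdbapp IDL backend):

    # The log's DbSetCommand corresponds to an api.db_set() call queued in
    # a transaction; record and values here are taken from the line above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.db_set(
            'Chassis_Private',
            'bbd59242-3683-4df7-8a2a-12b2eb702783',
            ('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),
        ))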
Nov 26 23:41:08 compute-0 NetworkManager[56227]: <info>  [1764200468.5369] manager: (patch-provnet-c9d942ea-ad4b-46cc-9d84-38b9cfb3db21-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Nov 26 23:41:08 compute-0 NetworkManager[56227]: <info>  [1764200468.5505] manager: (patch-br-int-to-provnet-c9d942ea-ad4b-46cc-9d84-38b9cfb3db21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Nov 26 23:41:08 compute-0 nova_compute[189387]: 2025-11-26 23:41:08.545 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:41:08 compute-0 nova_compute[189387]: 2025-11-26 23:41:08.628 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:41:08 compute-0 ovn_controller[97697]: 2025-11-26T23:41:08Z|00080|binding|INFO|Releasing lport 779990b0-f58d-4df2-b9a7-48b5134f6ea9 from this chassis (sb_readonly=0)
Nov 26 23:41:08 compute-0 ovn_controller[97697]: 2025-11-26T23:41:08Z|00081|binding|INFO|Releasing lport 8597c58a-43c6-47b4-9e30-006eea2c4907 from this chassis (sb_readonly=0)
Nov 26 23:41:08 compute-0 nova_compute[189387]: 2025-11-26 23:41:08.657 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:41:08 compute-0 podman[250232]: 2025-11-26 23:41:08.869315353 +0000 UTC m=+0.165941578 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
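The health_status=healthy entry above comes from podman running the container's configured test ('/openstack/healthcheck', mounted read-only into the container) on its timer. The same probe can be triggered by hand; exit status 0 is what keeps health_failing_streak at 0:

    import subprocess

    # Runs the container's configured healthcheck once, on demand.
    subprocess.check_call(["podman", "healthcheck", "run", "ovn_controller"])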
Nov 26 23:41:09 compute-0 nova_compute[189387]: 2025-11-26 23:41:09.591 189391 DEBUG nova.compute.manager [req-c1798fd8-3626-4a94-8c7e-3956e39ce8b2 req-f42d053e-72c3-4006-b90f-2a5854817a11 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Received event network-changed-c92ee6b2-3f41-4732-97c1-c31d830eb511 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 23:41:09 compute-0 nova_compute[189387]: 2025-11-26 23:41:09.592 189391 DEBUG nova.compute.manager [req-c1798fd8-3626-4a94-8c7e-3956e39ce8b2 req-f42d053e-72c3-4006-b90f-2a5854817a11 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Refreshing instance network info cache due to event network-changed-c92ee6b2-3f41-4732-97c1-c31d830eb511. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 26 23:41:09 compute-0 nova_compute[189387]: 2025-11-26 23:41:09.593 189391 DEBUG oslo_concurrency.lockutils [req-c1798fd8-3626-4a94-8c7e-3956e39ce8b2 req-f42d053e-72c3-4006-b90f-2a5854817a11 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "refresh_cache-8feca651-47c9-4aa9-b922-3552759e013f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:41:09 compute-0 nova_compute[189387]: 2025-11-26 23:41:09.593 189391 DEBUG oslo_concurrency.lockutils [req-c1798fd8-3626-4a94-8c7e-3956e39ce8b2 req-f42d053e-72c3-4006-b90f-2a5854817a11 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquired lock "refresh_cache-8feca651-47c9-4aa9-b922-3552759e013f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:41:09 compute-0 nova_compute[189387]: 2025-11-26 23:41:09.593 189391 DEBUG nova.network.neutron [req-c1798fd8-3626-4a94-8c7e-3956e39ce8b2 req-f42d053e-72c3-4006-b90f-2a5854817a11 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Refreshing network info cache for port c92ee6b2-3f41-4732-97c1-c31d830eb511 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 26 23:41:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:09.650 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:41:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:09.651 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:41:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:09.652 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:41:09 compute-0 nova_compute[189387]: 2025-11-26 23:41:09.734 189391 DEBUG nova.compute.manager [req-f29e26cd-861b-475d-9fc8-faa38b595fec req-004d3280-3a0f-431d-8911-2fb864f5c020 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Received event network-changed-b2fce3d4-667e-40f1-8fad-b23b6e4286db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 23:41:09 compute-0 nova_compute[189387]: 2025-11-26 23:41:09.735 189391 DEBUG nova.compute.manager [req-f29e26cd-861b-475d-9fc8-faa38b595fec req-004d3280-3a0f-431d-8911-2fb864f5c020 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Refreshing instance network info cache due to event network-changed-b2fce3d4-667e-40f1-8fad-b23b6e4286db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 26 23:41:09 compute-0 nova_compute[189387]: 2025-11-26 23:41:09.735 189391 DEBUG oslo_concurrency.lockutils [req-f29e26cd-861b-475d-9fc8-faa38b595fec req-004d3280-3a0f-431d-8911-2fb864f5c020 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "refresh_cache-696e6032-d12c-4533-ae7c-c510dc917f0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:41:09 compute-0 nova_compute[189387]: 2025-11-26 23:41:09.735 189391 DEBUG oslo_concurrency.lockutils [req-f29e26cd-861b-475d-9fc8-faa38b595fec req-004d3280-3a0f-431d-8911-2fb864f5c020 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquired lock "refresh_cache-696e6032-d12c-4533-ae7c-c510dc917f0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:41:09 compute-0 nova_compute[189387]: 2025-11-26 23:41:09.735 189391 DEBUG nova.network.neutron [req-f29e26cd-861b-475d-9fc8-faa38b595fec req-004d3280-3a0f-431d-8911-2fb864f5c020 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Refreshing network info cache for port b2fce3d4-667e-40f1-8fad-b23b6e4286db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 26 23:41:10 compute-0 ovn_controller[97697]: 2025-11-26T23:41:10Z|00082|binding|INFO|Releasing lport 779990b0-f58d-4df2-b9a7-48b5134f6ea9 from this chassis (sb_readonly=0)
Nov 26 23:41:10 compute-0 ovn_controller[97697]: 2025-11-26T23:41:10Z|00083|binding|INFO|Releasing lport 8597c58a-43c6-47b4-9e30-006eea2c4907 from this chassis (sb_readonly=0)
Nov 26 23:41:10 compute-0 nova_compute[189387]: 2025-11-26 23:41:10.454 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:41:10 compute-0 nova_compute[189387]: 2025-11-26 23:41:10.482 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:41:11 compute-0 nova_compute[189387]: 2025-11-26 23:41:11.503 189391 DEBUG oslo_concurrency.lockutils [None req-6da4e5be-7cc3-49f4-9ea9-f4c9addb4946 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Acquiring lock "8feca651-47c9-4aa9-b922-3552759e013f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:41:11 compute-0 nova_compute[189387]: 2025-11-26 23:41:11.503 189391 DEBUG oslo_concurrency.lockutils [None req-6da4e5be-7cc3-49f4-9ea9-f4c9addb4946 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Lock "8feca651-47c9-4aa9-b922-3552759e013f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:41:11 compute-0 nova_compute[189387]: 2025-11-26 23:41:11.504 189391 DEBUG oslo_concurrency.lockutils [None req-6da4e5be-7cc3-49f4-9ea9-f4c9addb4946 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Acquiring lock "8feca651-47c9-4aa9-b922-3552759e013f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:41:11 compute-0 nova_compute[189387]: 2025-11-26 23:41:11.505 189391 DEBUG oslo_concurrency.lockutils [None req-6da4e5be-7cc3-49f4-9ea9-f4c9addb4946 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Lock "8feca651-47c9-4aa9-b922-3552759e013f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:41:11 compute-0 nova_compute[189387]: 2025-11-26 23:41:11.506 189391 DEBUG oslo_concurrency.lockutils [None req-6da4e5be-7cc3-49f4-9ea9-f4c9addb4946 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Lock "8feca651-47c9-4aa9-b922-3552759e013f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:41:11 compute-0 nova_compute[189387]: 2025-11-26 23:41:11.509 189391 INFO nova.compute.manager [None req-6da4e5be-7cc3-49f4-9ea9-f4c9addb4946 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Terminating instance
Nov 26 23:41:11 compute-0 nova_compute[189387]: 2025-11-26 23:41:11.511 189391 DEBUG nova.compute.manager [None req-6da4e5be-7cc3-49f4-9ea9-f4c9addb4946 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 26 23:41:11 compute-0 kernel: tapc92ee6b2-3f (unregistering): left promiscuous mode
Nov 26 23:41:11 compute-0 NetworkManager[56227]: <info>  [1764200471.5724] device (tapc92ee6b2-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 23:41:11 compute-0 nova_compute[189387]: 2025-11-26 23:41:11.579 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:41:11 compute-0 ovn_controller[97697]: 2025-11-26T23:41:11Z|00084|binding|INFO|Releasing lport c92ee6b2-3f41-4732-97c1-c31d830eb511 from this chassis (sb_readonly=0)
Nov 26 23:41:11 compute-0 ovn_controller[97697]: 2025-11-26T23:41:11Z|00085|binding|INFO|Setting lport c92ee6b2-3f41-4732-97c1-c31d830eb511 down in Southbound
Nov 26 23:41:11 compute-0 ovn_controller[97697]: 2025-11-26T23:41:11Z|00086|binding|INFO|Removing iface tapc92ee6b2-3f ovn-installed in OVS
Nov 26 23:41:11 compute-0 nova_compute[189387]: 2025-11-26 23:41:11.601 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:41:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:11.606 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:44:18 10.100.0.10'], port_security=['fa:16:3e:cb:44:18 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '8feca651-47c9-4aa9-b922-3552759e013f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d179492f-9081-4ade-9309-d46e956ca91d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4bac30b9fde54025a33de2b34a9c54e4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '40a87626-8aed-4a1a-a337-d9fa8b4ebf44', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e040bb05-9e06-4ab7-9cca-57e26ef22943, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=c92ee6b2-3f41-4732-97c1-c31d830eb511) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 23:41:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:11.608 106595 INFO neutron.agent.ovn.metadata.agent [-] Port c92ee6b2-3f41-4732-97c1-c31d830eb511 in datapath d179492f-9081-4ade-9309-d46e956ca91d unbound from our chassis
Nov 26 23:41:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:11.611 106595 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d179492f-9081-4ade-9309-d46e956ca91d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Nov 26 23:41:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:11.613 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[2320b555-e20b-4925-908b-bb46924a48aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 26 23:41:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:11.617 106595 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d179492f-9081-4ade-9309-d46e956ca91d namespace which is not needed anymore
Nov 26 23:41:11 compute-0 nova_compute[189387]: 2025-11-26 23:41:11.626 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
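This teardown is the reverse of the provisioning sequence earlier in the log: the Port_Binding update arrives with the chassis cleared, the agent finds no VIF ports left on datapath d179492f-9081-4ade-9309-d46e956ca91d, and the per-network namespace plus its haproxy become garbage. Roughly what the cleanup performs (sketch, assumes root):

    import os
    import signal
    import subprocess

    def teardown_metadata_namespace(network_id, pidfile):
        # Stop the per-network haproxy via its pidfile, then remove
        # the now-empty ovnmeta- namespace.
        try:
            with open(pidfile) as f:
                os.kill(int(f.read().strip()), signal.SIGTERM)
        except (FileNotFoundError, ValueError, ProcessLookupError):
            pass  # proxy already gone
        subprocess.call(["ip", "netns", "delete", f"ovnmeta-{network_id}"])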
Nov 26 23:41:11 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000007.scope: Deactivated successfully.
Nov 26 23:41:11 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000007.scope: Consumed 7.685s CPU time.
Nov 26 23:41:11 compute-0 systemd-machined[155674]: Machine qemu-6-instance-00000007 terminated.
Nov 26 23:41:11 compute-0 podman[250255]: 2025-11-26 23:41:11.746236164 +0000 UTC m=+0.157227986 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, container_name=kepler)
Nov 26 23:41:11 compute-0 podman[250274]: 2025-11-26 23:41:11.789847225 +0000 UTC m=+0.148677319 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 26 23:41:11 compute-0 podman[250258]: 2025-11-26 23:41:11.790609606 +0000 UTC m=+0.195144977 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 23:41:11 compute-0 podman[250276]: 2025-11-26 23:41:11.793490232 +0000 UTC m=+0.141459727 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.expose-services=, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9)
Nov 26 23:41:11 compute-0 podman[250265]: 2025-11-26 23:41:11.808123731 +0000 UTC m=+0.159735823 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 26 23:41:11 compute-0 nova_compute[189387]: 2025-11-26 23:41:11.813 189391 INFO nova.virt.libvirt.driver [-] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Instance destroyed successfully.#033[00m
Nov 26 23:41:11 compute-0 nova_compute[189387]: 2025-11-26 23:41:11.814 189391 DEBUG nova.objects.instance [None req-6da4e5be-7cc3-49f4-9ea9-f4c9addb4946 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Lazy-loading 'resources' on Instance uuid 8feca651-47c9-4aa9-b922-3552759e013f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:41:11 compute-0 nova_compute[189387]: 2025-11-26 23:41:11.835 189391 DEBUG nova.virt.libvirt.vif [None req-6da4e5be-7cc3-49f4-9ea9-f4c9addb4946 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T23:40:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1059026894',display_name='tempest-ServersTestJSON-server-1059026894',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1059026894',id=7,image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBN6LDqS8NtXm7rO4cbfyRl0UwiXLVFy0BSNz5YHzCqgflhmlM1k6vMMlj08m2Lp4F/cmW40Xe3lUBo4GRQ/HDFQ2UOdK/42Fb4E6AO4M+rQSanHKxB2/n7D0EFuZOyVf7w==',key_name='tempest-keypair-2144774203',keypairs=<?>,launch_index=0,launched_at=2025-11-26T23:41:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4bac30b9fde54025a33de2b34a9c54e4',ramdisk_id='',reservation_id='r-thervqxb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1608376607',owner_user_name='tempest-ServersTestJSON-1608376607-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T23:41:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2ffd5a94272f4e6faf977bacb6cd544a',uuid=8feca651-47c9-4aa9-b922-3552759e013f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c92ee6b2-3f41-4732-97c1-c31d830eb511", "address": "fa:16:3e:cb:44:18", "network": {"id": "d179492f-9081-4ade-9309-d46e956ca91d", "bridge": "br-int", "label": "tempest-ServersTestJSON-1354841299-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bac30b9fde54025a33de2b34a9c54e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc92ee6b2-3f", "ovs_interfaceid": "c92ee6b2-3f41-4732-97c1-c31d830eb511", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 26 23:41:11 compute-0 nova_compute[189387]: 2025-11-26 23:41:11.836 189391 DEBUG nova.network.os_vif_util [None req-6da4e5be-7cc3-49f4-9ea9-f4c9addb4946 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Converting VIF {"id": "c92ee6b2-3f41-4732-97c1-c31d830eb511", "address": "fa:16:3e:cb:44:18", "network": {"id": "d179492f-9081-4ade-9309-d46e956ca91d", "bridge": "br-int", "label": "tempest-ServersTestJSON-1354841299-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bac30b9fde54025a33de2b34a9c54e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc92ee6b2-3f", "ovs_interfaceid": "c92ee6b2-3f41-4732-97c1-c31d830eb511", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:41:11 compute-0 nova_compute[189387]: 2025-11-26 23:41:11.837 189391 DEBUG nova.network.os_vif_util [None req-6da4e5be-7cc3-49f4-9ea9-f4c9addb4946 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:44:18,bridge_name='br-int',has_traffic_filtering=True,id=c92ee6b2-3f41-4732-97c1-c31d830eb511,network=Network(d179492f-9081-4ade-9309-d46e956ca91d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc92ee6b2-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:41:11 compute-0 nova_compute[189387]: 2025-11-26 23:41:11.837 189391 DEBUG os_vif [None req-6da4e5be-7cc3-49f4-9ea9-f4c9addb4946 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:44:18,bridge_name='br-int',has_traffic_filtering=True,id=c92ee6b2-3f41-4732-97c1-c31d830eb511,network=Network(d179492f-9081-4ade-9309-d46e956ca91d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc92ee6b2-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
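The three entries above show nova converting its VIF model to an os-vif VIFOpenVSwitch object and handing it to os_vif.unplug(). A minimal sketch of driving the same library call directly, assuming the os-vif package and a reachable local ovsdb-server; field values are copied from the log:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()

    # Field values copied from the VIF being unplugged above.
    my_vif = vif.VIFOpenVSwitch(
        id='c92ee6b2-3f41-4732-97c1-c31d830eb511',
        address='fa:16:3e:cb:44:18',
        vif_name='tapc92ee6b2-3f',
        bridge_name='br-int',
        has_traffic_filtering=True,
        plugin='ovs',
        network=network.Network(id='d179492f-9081-4ade-9309-d46e956ca91d'),
    )
    info = instance_info.InstanceInfo(
        uuid='8feca651-47c9-4aa9-b922-3552759e013f',
        name='tempest-ServersTestJSON-server-1059026894',
    )

    # Dispatches to the 'ovs' plugin, which issues the DelPortCommand
    # seen a few lines below.
    os_vif.unplug(my_vif, info)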
Nov 26 23:41:11 compute-0 nova_compute[189387]: 2025-11-26 23:41:11.839 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:11 compute-0 nova_compute[189387]: 2025-11-26 23:41:11.840 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc92ee6b2-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
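The DelPortCommand above is ovsdbapp removing the tap port from br-int in a single transaction. A minimal sketch of issuing the equivalent command through ovsdbapp, with the ovsdb-server socket path assumed to be the usual local one:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Endpoint assumed; adjust to the local ovsdb-server socket.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Equivalent of the DelPortCommand in the transaction above.
    api.del_port('tapc92ee6b2-3f', bridge='br-int',
                 if_exists=True).execute(check_error=True)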
Nov 26 23:41:11 compute-0 nova_compute[189387]: 2025-11-26 23:41:11.842 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:11 compute-0 nova_compute[189387]: 2025-11-26 23:41:11.843 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:11 compute-0 nova_compute[189387]: 2025-11-26 23:41:11.846 189391 INFO os_vif [None req-6da4e5be-7cc3-49f4-9ea9-f4c9addb4946 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:44:18,bridge_name='br-int',has_traffic_filtering=True,id=c92ee6b2-3f41-4732-97c1-c31d830eb511,network=Network(d179492f-9081-4ade-9309-d46e956ca91d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc92ee6b2-3f')#033[00m
Nov 26 23:41:11 compute-0 nova_compute[189387]: 2025-11-26 23:41:11.847 189391 INFO nova.virt.libvirt.driver [None req-6da4e5be-7cc3-49f4-9ea9-f4c9addb4946 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Deleting instance files /var/lib/nova/instances/8feca651-47c9-4aa9-b922-3552759e013f_del#033[00m
Nov 26 23:41:11 compute-0 nova_compute[189387]: 2025-11-26 23:41:11.848 189391 INFO nova.virt.libvirt.driver [None req-6da4e5be-7cc3-49f4-9ea9-f4c9addb4946 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Deletion of /var/lib/nova/instances/8feca651-47c9-4aa9-b922-3552759e013f_del complete#033[00m
Nov 26 23:41:11 compute-0 neutron-haproxy-ovnmeta-d179492f-9081-4ade-9309-d46e956ca91d[250114]: [NOTICE]   (250120) : haproxy version is 2.8.14-c23fe91
Nov 26 23:41:11 compute-0 neutron-haproxy-ovnmeta-d179492f-9081-4ade-9309-d46e956ca91d[250114]: [NOTICE]   (250120) : path to executable is /usr/sbin/haproxy
Nov 26 23:41:11 compute-0 neutron-haproxy-ovnmeta-d179492f-9081-4ade-9309-d46e956ca91d[250114]: [WARNING]  (250120) : Exiting Master process...
Nov 26 23:41:11 compute-0 neutron-haproxy-ovnmeta-d179492f-9081-4ade-9309-d46e956ca91d[250114]: [ALERT]    (250120) : Current worker (250130) exited with code 143 (Terminated)
Nov 26 23:41:11 compute-0 neutron-haproxy-ovnmeta-d179492f-9081-4ade-9309-d46e956ca91d[250114]: [WARNING]  (250120) : All workers exited. Exiting... (0)
Nov 26 23:41:11 compute-0 systemd[1]: libpod-156afb05434a419ecfcf6c82637abab3328ae2e7fcf3650fd52c51a21defa811.scope: Deactivated successfully.
Nov 26 23:41:11 compute-0 podman[250373]: 2025-11-26 23:41:11.867208604 +0000 UTC m=+0.087945522 container died 156afb05434a419ecfcf6c82637abab3328ae2e7fcf3650fd52c51a21defa811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d179492f-9081-4ade-9309-d46e956ca91d, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:41:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0cc5cc526ff42d848e7089ed8f70d878bb8841289de5ff916d6144c9537c279-merged.mount: Deactivated successfully.
Nov 26 23:41:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-156afb05434a419ecfcf6c82637abab3328ae2e7fcf3650fd52c51a21defa811-userdata-shm.mount: Deactivated successfully.
Nov 26 23:41:11 compute-0 podman[250373]: 2025-11-26 23:41:11.917069602 +0000 UTC m=+0.137806500 container cleanup 156afb05434a419ecfcf6c82637abab3328ae2e7fcf3650fd52c51a21defa811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d179492f-9081-4ade-9309-d46e956ca91d, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:41:11 compute-0 systemd[1]: libpod-conmon-156afb05434a419ecfcf6c82637abab3328ae2e7fcf3650fd52c51a21defa811.scope: Deactivated successfully.
Nov 26 23:41:11 compute-0 nova_compute[189387]: 2025-11-26 23:41:11.955 189391 INFO nova.compute.manager [None req-6da4e5be-7cc3-49f4-9ea9-f4c9addb4946 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Took 0.44 seconds to destroy the instance on the hypervisor.#033[00m
Nov 26 23:41:11 compute-0 nova_compute[189387]: 2025-11-26 23:41:11.956 189391 DEBUG oslo.service.loopingcall [None req-6da4e5be-7cc3-49f4-9ea9-f4c9addb4946 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 26 23:41:11 compute-0 nova_compute[189387]: 2025-11-26 23:41:11.956 189391 DEBUG nova.compute.manager [-] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 26 23:41:11 compute-0 nova_compute[189387]: 2025-11-26 23:41:11.957 189391 DEBUG nova.network.neutron [-] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
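The "Waiting for function ... to return" entry above comes from oslo.service's looping-call helper, which nova wraps around network deallocation so it can be retried. A minimal sketch of that primitive; the three-attempt policy here is illustrative, not nova's actual retry logic:

    from oslo_service import loopingcall

    state = {'attempts': 0}

    def poll():
        state['attempts'] += 1
        if state['attempts'] >= 3:
            # Raising LoopingCallDone stops the loop and returns a value
            # to whoever is blocked in wait().
            raise loopingcall.LoopingCallDone(retvalue='deallocated')

    timer = loopingcall.FixedIntervalLoopingCall(poll)
    print(timer.start(interval=0.1).wait())  # -> 'deallocated'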
Nov 26 23:41:12 compute-0 nova_compute[189387]: 2025-11-26 23:41:12.007 189391 DEBUG nova.compute.manager [req-ff8cfb94-22c8-41e8-bdfc-9a9b1c10bca5 req-7b3c241c-9a1e-44b3-be3a-0375bbbfa3bb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Received event network-vif-unplugged-c92ee6b2-3f41-4732-97c1-c31d830eb511 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:41:12 compute-0 nova_compute[189387]: 2025-11-26 23:41:12.008 189391 DEBUG oslo_concurrency.lockutils [req-ff8cfb94-22c8-41e8-bdfc-9a9b1c10bca5 req-7b3c241c-9a1e-44b3-be3a-0375bbbfa3bb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "8feca651-47c9-4aa9-b922-3552759e013f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:41:12 compute-0 nova_compute[189387]: 2025-11-26 23:41:12.008 189391 DEBUG oslo_concurrency.lockutils [req-ff8cfb94-22c8-41e8-bdfc-9a9b1c10bca5 req-7b3c241c-9a1e-44b3-be3a-0375bbbfa3bb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "8feca651-47c9-4aa9-b922-3552759e013f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:41:12 compute-0 nova_compute[189387]: 2025-11-26 23:41:12.009 189391 DEBUG oslo_concurrency.lockutils [req-ff8cfb94-22c8-41e8-bdfc-9a9b1c10bca5 req-7b3c241c-9a1e-44b3-be3a-0375bbbfa3bb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "8feca651-47c9-4aa9-b922-3552759e013f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
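The acquire/release pair above is an oslo.concurrency in-process lock keyed on "<instance-uuid>-events", which serializes event dispatch per instance. A minimal sketch of both forms of the primitive, using the same lock name as in the log:

    from oslo_concurrency import lockutils

    # Context-manager form:
    with lockutils.lock('8feca651-47c9-4aa9-b922-3552759e013f-events'):
        pass  # pop or record an instance event here

    # Decorator form, as used around nova's _pop_event helper:
    @lockutils.synchronized('8feca651-47c9-4aa9-b922-3552759e013f-events')
    def pop_event():
        pass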
Nov 26 23:41:12 compute-0 nova_compute[189387]: 2025-11-26 23:41:12.009 189391 DEBUG nova.compute.manager [req-ff8cfb94-22c8-41e8-bdfc-9a9b1c10bca5 req-7b3c241c-9a1e-44b3-be3a-0375bbbfa3bb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] No waiting events found dispatching network-vif-unplugged-c92ee6b2-3f41-4732-97c1-c31d830eb511 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:41:12 compute-0 nova_compute[189387]: 2025-11-26 23:41:12.009 189391 DEBUG nova.compute.manager [req-ff8cfb94-22c8-41e8-bdfc-9a9b1c10bca5 req-7b3c241c-9a1e-44b3-be3a-0375bbbfa3bb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Received event network-vif-unplugged-c92ee6b2-3f41-4732-97c1-c31d830eb511 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 26 23:41:12 compute-0 podman[250418]: 2025-11-26 23:41:12.012239866 +0000 UTC m=+0.067529349 container remove 156afb05434a419ecfcf6c82637abab3328ae2e7fcf3650fd52c51a21defa811 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d179492f-9081-4ade-9309-d46e956ca91d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 23:41:12 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:12.021 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[5ac6e95b-9c14-4b81-96a2-e13767211fbf]: (4, ('Wed Nov 26 11:41:11 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d179492f-9081-4ade-9309-d46e956ca91d (156afb05434a419ecfcf6c82637abab3328ae2e7fcf3650fd52c51a21defa811)\n156afb05434a419ecfcf6c82637abab3328ae2e7fcf3650fd52c51a21defa811\nWed Nov 26 11:41:11 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d179492f-9081-4ade-9309-d46e956ca91d (156afb05434a419ecfcf6c82637abab3328ae2e7fcf3650fd52c51a21defa811)\n156afb05434a419ecfcf6c82637abab3328ae2e7fcf3650fd52c51a21defa811\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:12 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:12.022 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[12b530b6-d632-4641-92db-8220b898b4e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
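The "privsep: reply" entries are responses from the agent's privileged helper daemon: unprivileged agent code calls a decorated entrypoint, the call runs with elevated capabilities, and the result travels back over this channel. A minimal sketch of the pattern with oslo.privsep; the context and entrypoint below are hypothetical, not neutron's actual definitions:

    from oslo_privsep import capabilities, priv_context

    # Hypothetical context; neutron ships its own in neutron.privileged.
    ctx = priv_context.PrivContext(
        'example',
        cfg_section='example_privsep',
        pypath=__name__ + '.ctx',  # must be an importable path in real use
        capabilities=[capabilities.CAP_NET_ADMIN],
    )

    @ctx.entrypoint
    def delete_interface(name):
        # Runs inside the privileged daemon; the return value comes back
        # to the caller as a "privsep: reply" like those logged above.
        return None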
Nov 26 23:41:12 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:12.023 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd179492f-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:41:12 compute-0 nova_compute[189387]: 2025-11-26 23:41:12.024 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:12 compute-0 kernel: tapd179492f-90: left promiscuous mode
Nov 26 23:41:12 compute-0 nova_compute[189387]: 2025-11-26 23:41:12.027 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:12 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:12.035 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[c15c6ad7-75d0-47e6-8dec-bc90025b3783]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:12 compute-0 nova_compute[189387]: 2025-11-26 23:41:12.046 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:12 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:12.066 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[18921ad6-cd7e-4e98-ac5b-dfd3c8d1204f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:12 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:12.067 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[bb312269-3207-4daa-b5d5-ed1384970dbe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:12 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:12.088 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[102e1075-eb43-4a49-999b-72e6861f2bb9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514529, 'reachable_time': 40727, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250432, 'error': None, 'target': 'ovnmeta-d179492f-9081-4ade-9309-d46e956ca91d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
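That privsep reply is a netlink RTM_NEWLINK dump of the loopback device inside the ovnmeta- namespace, produced via pyroute2 under neutron's privsep daemon. A minimal sketch of generating such a dump, assuming the namespace still exists (it is deleted a moment later in this log):

    from pyroute2 import NetNS

    # flags=0 keeps NetNS from implicitly creating the namespace if it
    # is already gone.
    with NetNS('ovnmeta-d179492f-9081-4ade-9309-d46e956ca91d', flags=0) as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_MTU'),
                  link['state'])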
Nov 26 23:41:12 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:12.095 106708 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d179492f-9081-4ade-9309-d46e956ca91d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 26 23:41:12 compute-0 systemd[1]: run-netns-ovnmeta\x2dd179492f\x2d9081\x2d4ade\x2d9309\x2dd46e956ca91d.mount: Deactivated successfully.
Nov 26 23:41:12 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:12.095 106708 DEBUG oslo.privsep.daemon [-] privsep: reply[a1575e7d-984e-48a3-ab29-e238cb5a672e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:12 compute-0 nova_compute[189387]: 2025-11-26 23:41:12.842 189391 DEBUG nova.network.neutron [req-c1798fd8-3626-4a94-8c7e-3956e39ce8b2 req-f42d053e-72c3-4006-b90f-2a5854817a11 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Updated VIF entry in instance network info cache for port c92ee6b2-3f41-4732-97c1-c31d830eb511. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 23:41:12 compute-0 nova_compute[189387]: 2025-11-26 23:41:12.844 189391 DEBUG nova.network.neutron [req-c1798fd8-3626-4a94-8c7e-3956e39ce8b2 req-f42d053e-72c3-4006-b90f-2a5854817a11 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Updating instance_info_cache with network_info: [{"id": "c92ee6b2-3f41-4732-97c1-c31d830eb511", "address": "fa:16:3e:cb:44:18", "network": {"id": "d179492f-9081-4ade-9309-d46e956ca91d", "bridge": "br-int", "label": "tempest-ServersTestJSON-1354841299-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4bac30b9fde54025a33de2b34a9c54e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc92ee6b2-3f", "ovs_interfaceid": "c92ee6b2-3f41-4732-97c1-c31d830eb511", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:41:12 compute-0 nova_compute[189387]: 2025-11-26 23:41:12.874 189391 DEBUG oslo_concurrency.lockutils [req-c1798fd8-3626-4a94-8c7e-3956e39ce8b2 req-f42d053e-72c3-4006-b90f-2a5854817a11 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Releasing lock "refresh_cache-8feca651-47c9-4aa9-b922-3552759e013f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:41:12 compute-0 nova_compute[189387]: 2025-11-26 23:41:12.934 189391 DEBUG nova.network.neutron [req-f29e26cd-861b-475d-9fc8-faa38b595fec req-004d3280-3a0f-431d-8911-2fb864f5c020 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Updated VIF entry in instance network info cache for port b2fce3d4-667e-40f1-8fad-b23b6e4286db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 23:41:12 compute-0 nova_compute[189387]: 2025-11-26 23:41:12.936 189391 DEBUG nova.network.neutron [req-f29e26cd-861b-475d-9fc8-faa38b595fec req-004d3280-3a0f-431d-8911-2fb864f5c020 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Updating instance_info_cache with network_info: [{"id": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "address": "fa:16:3e:94:50:8a", "network": {"id": "23864f37-12d9-4f3e-a0da-ef91c19406ac", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1986799011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cda1d63c3f9d4791a18030ebba1c1b11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2fce3d4-66", "ovs_interfaceid": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:41:12 compute-0 nova_compute[189387]: 2025-11-26 23:41:12.951 189391 DEBUG oslo_concurrency.lockutils [req-f29e26cd-861b-475d-9fc8-faa38b595fec req-004d3280-3a0f-431d-8911-2fb864f5c020 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Releasing lock "refresh_cache-696e6032-d12c-4533-ae7c-c510dc917f0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:41:13 compute-0 nova_compute[189387]: 2025-11-26 23:41:13.558 189391 DEBUG nova.network.neutron [-] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:41:13 compute-0 nova_compute[189387]: 2025-11-26 23:41:13.592 189391 INFO nova.compute.manager [-] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Took 1.64 seconds to deallocate network for instance.#033[00m
Nov 26 23:41:13 compute-0 nova_compute[189387]: 2025-11-26 23:41:13.648 189391 DEBUG oslo_concurrency.lockutils [None req-6da4e5be-7cc3-49f4-9ea9-f4c9addb4946 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:41:13 compute-0 nova_compute[189387]: 2025-11-26 23:41:13.649 189391 DEBUG oslo_concurrency.lockutils [None req-6da4e5be-7cc3-49f4-9ea9-f4c9addb4946 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:41:13 compute-0 nova_compute[189387]: 2025-11-26 23:41:13.749 189391 DEBUG nova.compute.provider_tree [None req-6da4e5be-7cc3-49f4-9ea9-f4c9addb4946 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:41:13 compute-0 nova_compute[189387]: 2025-11-26 23:41:13.765 189391 DEBUG nova.scheduler.client.report [None req-6da4e5be-7cc3-49f4-9ea9-f4c9addb4946 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
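Placement derives usable capacity from this inventory as (total - reserved) * allocation_ratio, so the figures above work out to 32 VCPU, 7168 MB of RAM, and 70 GB of disk. A quick check:

    # Inventory as reported above for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, int(capacity))  # VCPU 32, MEMORY_MB 7168, DISK_GB 70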
Nov 26 23:41:13 compute-0 nova_compute[189387]: 2025-11-26 23:41:13.794 189391 DEBUG oslo_concurrency.lockutils [None req-6da4e5be-7cc3-49f4-9ea9-f4c9addb4946 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.144s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:41:13 compute-0 nova_compute[189387]: 2025-11-26 23:41:13.821 189391 INFO nova.scheduler.client.report [None req-6da4e5be-7cc3-49f4-9ea9-f4c9addb4946 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Deleted allocations for instance 8feca651-47c9-4aa9-b922-3552759e013f#033[00m
Nov 26 23:41:13 compute-0 nova_compute[189387]: 2025-11-26 23:41:13.845 189391 DEBUG nova.compute.manager [req-bf1fb50e-7c69-49dc-83c0-1c8012ef5a57 req-b27d3075-c15d-426b-a161-eb83dcaa6245 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Received event network-vif-deleted-c92ee6b2-3f41-4732-97c1-c31d830eb511 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:41:13 compute-0 nova_compute[189387]: 2025-11-26 23:41:13.904 189391 DEBUG oslo_concurrency.lockutils [None req-6da4e5be-7cc3-49f4-9ea9-f4c9addb4946 2ffd5a94272f4e6faf977bacb6cd544a 4bac30b9fde54025a33de2b34a9c54e4 - - default default] Lock "8feca651-47c9-4aa9-b922-3552759e013f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:41:14 compute-0 nova_compute[189387]: 2025-11-26 23:41:14.160 189391 DEBUG nova.compute.manager [req-236e79b0-0e42-4c77-a778-91211cb490c7 req-65793a85-07d3-4bf8-bd8f-3e0174f81cbd f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Received event network-vif-plugged-c92ee6b2-3f41-4732-97c1-c31d830eb511 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:41:14 compute-0 nova_compute[189387]: 2025-11-26 23:41:14.161 189391 DEBUG oslo_concurrency.lockutils [req-236e79b0-0e42-4c77-a778-91211cb490c7 req-65793a85-07d3-4bf8-bd8f-3e0174f81cbd f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "8feca651-47c9-4aa9-b922-3552759e013f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:41:14 compute-0 nova_compute[189387]: 2025-11-26 23:41:14.162 189391 DEBUG oslo_concurrency.lockutils [req-236e79b0-0e42-4c77-a778-91211cb490c7 req-65793a85-07d3-4bf8-bd8f-3e0174f81cbd f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "8feca651-47c9-4aa9-b922-3552759e013f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:41:14 compute-0 nova_compute[189387]: 2025-11-26 23:41:14.162 189391 DEBUG oslo_concurrency.lockutils [req-236e79b0-0e42-4c77-a778-91211cb490c7 req-65793a85-07d3-4bf8-bd8f-3e0174f81cbd f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "8feca651-47c9-4aa9-b922-3552759e013f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:41:14 compute-0 nova_compute[189387]: 2025-11-26 23:41:14.163 189391 DEBUG nova.compute.manager [req-236e79b0-0e42-4c77-a778-91211cb490c7 req-65793a85-07d3-4bf8-bd8f-3e0174f81cbd f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] No waiting events found dispatching network-vif-plugged-c92ee6b2-3f41-4732-97c1-c31d830eb511 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:41:14 compute-0 nova_compute[189387]: 2025-11-26 23:41:14.164 189391 WARNING nova.compute.manager [req-236e79b0-0e42-4c77-a778-91211cb490c7 req-65793a85-07d3-4bf8-bd8f-3e0174f81cbd f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Received unexpected event network-vif-plugged-c92ee6b2-3f41-4732-97c1-c31d830eb511 for instance with vm_state deleted and task_state None.#033[00m
Nov 26 23:41:15 compute-0 nova_compute[189387]: 2025-11-26 23:41:15.485 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:16 compute-0 nova_compute[189387]: 2025-11-26 23:41:16.843 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:18 compute-0 ovn_controller[97697]: 2025-11-26T23:41:18Z|00087|binding|INFO|Releasing lport 779990b0-f58d-4df2-b9a7-48b5134f6ea9 from this chassis (sb_readonly=0)
Nov 26 23:41:18 compute-0 nova_compute[189387]: 2025-11-26 23:41:18.531 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:20 compute-0 nova_compute[189387]: 2025-11-26 23:41:20.138 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:20 compute-0 nova_compute[189387]: 2025-11-26 23:41:20.489 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:20 compute-0 podman[250434]: 2025-11-26 23:41:20.773982855 +0000 UTC m=+0.074802623 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Nov 26 23:41:21 compute-0 nova_compute[189387]: 2025-11-26 23:41:21.846 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:24 compute-0 podman[250453]: 2025-11-26 23:41:24.819383023 +0000 UTC m=+0.104698008 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:41:25 compute-0 nova_compute[189387]: 2025-11-26 23:41:25.491 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:26 compute-0 nova_compute[189387]: 2025-11-26 23:41:26.803 189391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764200471.801239, 8feca651-47c9-4aa9-b922-3552759e013f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:41:26 compute-0 nova_compute[189387]: 2025-11-26 23:41:26.804 189391 INFO nova.compute.manager [-] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] VM Stopped (Lifecycle Event)#033[00m
Nov 26 23:41:26 compute-0 nova_compute[189387]: 2025-11-26 23:41:26.820 189391 DEBUG nova.compute.manager [None req-97b4c00e-de3f-40ea-8d42-13035830954c - - - - - -] [instance: 8feca651-47c9-4aa9-b922-3552759e013f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:41:26 compute-0 nova_compute[189387]: 2025-11-26 23:41:26.848 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:28 compute-0 nova_compute[189387]: 2025-11-26 23:41:28.278 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:28 compute-0 nova_compute[189387]: 2025-11-26 23:41:28.507 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:29 compute-0 podman[203621]: time="2025-11-26T23:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:41:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:41:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4798 "" "Go-http-client/1.1"
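These two entries are the libpod REST API answering queries over the podman socket, the same API the prometheus-podman-exporter consumes via CONTAINER_HOST. A minimal sketch of the containers/json?all=true call using the podman Python client (podman-py), assuming that package is installed and the socket path matches:

    from podman import PodmanClient

    with PodmanClient(base_url='unix:///run/podman/podman.sock') as client:
        # Mirrors GET /libpod/containers/json?all=true from the log.
        for ctr in client.containers.list(all=True):
            print(ctr.name, ctr.status)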
Nov 26 23:41:30 compute-0 nova_compute[189387]: 2025-11-26 23:41:30.495 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:31 compute-0 openstack_network_exporter[205787]: ERROR   23:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:41:31 compute-0 openstack_network_exporter[205787]: ERROR   23:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:41:31 compute-0 openstack_network_exporter[205787]: ERROR   23:41:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:41:31 compute-0 openstack_network_exporter[205787]: ERROR   23:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:41:31 compute-0 openstack_network_exporter[205787]: ERROR   23:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
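These exporter errors are expected on a compute node: ovn-northd and a standalone ovsdb-server do not run here, so no control sockets exist for them. A minimal sketch of that kind of socket discovery; the glob patterns are assumptions matching the usual /var/run layout, not the exporter's exact lookup:

    import glob

    for pattern in ('/var/run/ovn/ovn-northd.*.ctl',
                    '/var/run/openvswitch/ovsdb-server.*.ctl'):
        hits = glob.glob(pattern)
        print(pattern, '->', hits if hits else 'no control socket files found')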
Nov 26 23:41:31 compute-0 podman[250477]: 2025-11-26 23:41:31.826125029 +0000 UTC m=+0.115064964 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible)
Nov 26 23:41:31 compute-0 nova_compute[189387]: 2025-11-26 23:41:31.851 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:35 compute-0 nova_compute[189387]: 2025-11-26 23:41:35.498 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:35 compute-0 nova_compute[189387]: 2025-11-26 23:41:35.806 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:36 compute-0 nova_compute[189387]: 2025-11-26 23:41:36.853 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00088|inc_proc_eng|INFO|node: logical_flow_output, handler for input SB_logical_flow took 1007ms
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00089|timeval|WARN|Unreasonably long 1008ms poll interval (503ms user, 479ms system)
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00090|timeval|WARN|context switches: 0 voluntary, 11 involuntary
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00091|coverage|INFO|Event coverage, avg rate over last: 5 seconds, last minute, last hour,  hash=86d598d4:
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00092|coverage|INFO|vconn_sent                68.6/sec    83.133/sec        1.9281/sec   total: 7132
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00093|coverage|INFO|vconn_received             2.4/sec     3.033/sec        0.1422/sec   total: 518
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00094|coverage|INFO|vconn_open                 0.0/sec     0.000/sec        0.0011/sec   total: 4
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00095|coverage|INFO|util_xalloc              20122.4/sec 20842.683/sec      517.5244/sec   total: 1925338
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00096|coverage|INFO|unixctl_replied            0.0/sec     0.067/sec        0.0383/sec   total: 139
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00097|coverage|INFO|unixctl_received           0.0/sec     0.067/sec        0.0383/sec   total: 139
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00098|coverage|INFO|stream_open                0.0/sec     0.000/sec        0.0019/sec   total: 7
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00099|coverage|INFO|pstream_open               0.0/sec     0.000/sec        0.0003/sec   total: 1
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00100|coverage|INFO|seq_change               217.8/sec    91.633/sec        2.8389/sec   total: 10698
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00101|coverage|INFO|rconn_sent                68.6/sec    83.133/sec        1.9269/sec   total: 7128
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00102|coverage|INFO|rconn_queued              68.6/sec    83.133/sec        1.9269/sec   total: 7128
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00103|coverage|INFO|poll_zero_timeout          0.0/sec     0.167/sec        0.0178/sec   total: 64
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00104|coverage|INFO|poll_create_node         477.6/sec   231.800/sec        7.7783/sec   total: 29097
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00105|coverage|INFO|txn_success                0.4/sec     0.517/sec        0.0239/sec   total: 87
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00106|coverage|INFO|txn_incomplete             0.4/sec     0.717/sec        0.0389/sec   total: 141
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00107|coverage|INFO|txn_unchanged             13.6/sec    15.183/sec        0.6217/sec   total: 2283
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00108|coverage|INFO|hmap_reserve              14.0/sec    14.133/sec        0.6847/sec   total: 2511
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00109|coverage|INFO|hmap_shrink                0.0/sec     0.000/sec        0.0006/sec   total: 2
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00110|coverage|INFO|hmap_expand              279.4/sec   333.983/sec        9.7394/sec   total: 35859
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00111|coverage|INFO|hmap_pathological          1.8/sec     3.117/sec        0.0708/sec   total: 270
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00112|coverage|INFO|miniflow_malloc          709.6/sec   777.367/sec       15.6836/sec   total: 59373
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00113|coverage|INFO|flow_extract               0.0/sec     0.000/sec        0.0100/sec   total: 36
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00114|coverage|INFO|physical_run               0.8/sec     0.833/sec        0.0178/sec   total: 66
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00115|coverage|INFO|pinctrl_total_pin_pkts     0.0/sec     0.000/sec        0.0100/sec   total: 36
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00116|coverage|INFO|pinctrl_notify_main_thread   0.0/sec     0.000/sec        0.0011/sec   total: 4
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00117|coverage|INFO|lflow_conj_free            0.0/sec     0.083/sec        0.0047/sec   total: 17
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00118|coverage|INFO|lflow_conj_alloc           0.0/sec     0.283/sec        0.0119/sec   total: 43
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00119|coverage|INFO|lflow_cache_trim           0.0/sec     0.017/sec        0.0022/sec   total: 8
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00120|coverage|INFO|lflow_cache_delete        77.8/sec    65.883/sec        1.1625/sec   total: 4459
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00121|coverage|INFO|lflow_cache_miss         123.0/sec   121.650/sec        2.4061/sec   total: 9126
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00122|coverage|INFO|lflow_cache_hit          297.4/sec   357.350/sec        6.9411/sec   total: 26310
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00123|coverage|INFO|lflow_cache_add           90.0/sec    69.717/sec        1.2817/sec   total: 4922
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00124|coverage|INFO|lflow_cache_free_matches  76.8/sec    62.017/sec        1.0686/sec   total: 4116
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00125|coverage|INFO|lflow_cache_free_expr      1.0/sec     3.867/sec        0.0939/sec   total: 343
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00126|coverage|INFO|lflow_cache_add_matches   79.8/sec    63.600/sec        1.1353/sec   total: 4363
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00127|coverage|INFO|lflow_cache_add_expr      10.2/sec     6.117/sec        0.1464/sec   total: 559
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00128|coverage|INFO|consider_logical_flow     93.4/sec   164.600/sec        3.8944/sec   total: 14347
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00129|coverage|INFO|lflow_run                  0.0/sec     0.150/sec        0.0061/sec   total: 22
Nov 26 23:41:37 compute-0 ovn_controller[97697]: 2025-11-26T23:41:37Z|00130|coverage|INFO|121 events never hit
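The 1008 ms poll-interval WARN is what prompts ovn-controller to dump its coverage counters: each line is an event name with average rates over the last 5 seconds, last minute, and last hour, plus a lifetime total (the same data `ovs-appctl -t ovn-controller coverage/show` prints on demand). A minimal parsing sketch; the regex is an assumption inferred from the format of the lines above:

    import re

    LINE = re.compile(
        r"coverage\|INFO\|(?P<name>\S+)\s+"
        r"(?P<r5>[\d.]+)/sec\s+(?P<r60>[\d.]+)/sec\s+(?P<r3600>[\d.]+)/sec\s+"
        r"total:\s+(?P<total>\d+)"
    )

    def parse_coverage(line: str):
        m = LINE.search(line)
        if not m:
            return None
        return (m["name"], float(m["r5"]), float(m["r60"]),
                float(m["r3600"]), int(m["total"]))

    print(parse_coverage(
        "...|coverage|INFO|lflow_cache_hit          297.4/sec   357.350/sec        6.9411/sec   total: 26310"
    ))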
Nov 26 23:41:37 compute-0 nova_compute[189387]: 2025-11-26 23:41:37.233 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:38 compute-0 nova_compute[189387]: 2025-11-26 23:41:38.537 189391 DEBUG oslo_concurrency.lockutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Acquiring lock "c6b20e96-2371-4349-b934-bdb87bec59d0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:41:38 compute-0 nova_compute[189387]: 2025-11-26 23:41:38.539 189391 DEBUG oslo_concurrency.lockutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Lock "c6b20e96-2371-4349-b934-bdb87bec59d0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:41:38 compute-0 nova_compute[189387]: 2025-11-26 23:41:38.559 189391 DEBUG nova.compute.manager [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 26 23:41:38 compute-0 nova_compute[189387]: 2025-11-26 23:41:38.646 189391 DEBUG oslo_concurrency.lockutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:41:38 compute-0 nova_compute[189387]: 2025-11-26 23:41:38.647 189391 DEBUG oslo_concurrency.lockutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:41:38 compute-0 nova_compute[189387]: 2025-11-26 23:41:38.659 189391 DEBUG nova.virt.hardware [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 26 23:41:38 compute-0 nova_compute[189387]: 2025-11-26 23:41:38.659 189391 INFO nova.compute.claims [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 26 23:41:38 compute-0 nova_compute[189387]: 2025-11-26 23:41:38.813 189391 DEBUG nova.compute.provider_tree [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:41:38 compute-0 nova_compute[189387]: 2025-11-26 23:41:38.831 189391 DEBUG nova.scheduler.client.report [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 23:41:38 compute-0 nova_compute[189387]: 2025-11-26 23:41:38.860 189391 DEBUG oslo_concurrency.lockutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
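The Acquiring/acquired/released triplets with `waited`/`held` timings are emitted by oslo.concurrency's lock wrapper (the `inner` function in lockutils.py): `instance_claim` runs under the `compute_resources` lock so resource accounting is serialized per host. A minimal sketch, assuming oslo.concurrency is installed; the function body is a stand-in, only the lock name mirrors the log:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def instance_claim_stub():
        # While this runs, any other thread logging 'Acquiring lock
        # "compute_resources"' blocks, then reports how long it waited.
        pass

    instance_claim_stub()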
Nov 26 23:41:38 compute-0 nova_compute[189387]: 2025-11-26 23:41:38.861 189391 DEBUG nova.compute.manager [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 26 23:41:38 compute-0 nova_compute[189387]: 2025-11-26 23:41:38.912 189391 DEBUG nova.compute.manager [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 26 23:41:38 compute-0 nova_compute[189387]: 2025-11-26 23:41:38.912 189391 DEBUG nova.network.neutron [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 26 23:41:38 compute-0 nova_compute[189387]: 2025-11-26 23:41:38.928 189391 INFO nova.virt.libvirt.driver [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 26 23:41:38 compute-0 nova_compute[189387]: 2025-11-26 23:41:38.948 189391 DEBUG nova.compute.manager [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.066 189391 DEBUG nova.compute.manager [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.067 189391 DEBUG nova.virt.libvirt.driver [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.068 189391 INFO nova.virt.libvirt.driver [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Creating image(s)#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.068 189391 DEBUG oslo_concurrency.lockutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Acquiring lock "/var/lib/nova/instances/c6b20e96-2371-4349-b934-bdb87bec59d0/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.069 189391 DEBUG oslo_concurrency.lockutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Lock "/var/lib/nova/instances/c6b20e96-2371-4349-b934-bdb87bec59d0/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.070 189391 DEBUG oslo_concurrency.lockutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Lock "/var/lib/nova/instances/c6b20e96-2371-4349-b934-bdb87bec59d0/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.087 189391 DEBUG oslo_concurrency.processutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.130 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.131 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.132 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.156 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.170 189391 DEBUG oslo_concurrency.processutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
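Every `qemu-img info` in this trace is wrapped in `python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30`, which applies RLIMIT_AS (1 GiB) and RLIMIT_CPU (30 s) to the child so a malformed image cannot drive qemu-img into unbounded memory or CPU use; `--force-share --output=json` allows reading a possibly in-use disk and returns machine-parsable output. A minimal sketch of the same guarded call via oslo.concurrency's processutils, under the assumption that this mirrors nova's invocation:

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1073741824,  # --as=1073741824 (RLIMIT_AS, 1 GiB)
        cpu_time=30,               # --cpu=30 (RLIMIT_CPU, seconds)
    )
    out, _err = processutils.execute(
        "qemu-img", "info",
        "/var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685",
        "--force-share", "--output=json",
        prlimit=limits,
        env_variables={"LC_ALL": "C", "LANG": "C"},
    )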
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.171 189391 DEBUG oslo_concurrency.lockutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Acquiring lock "4bfc824fda96e5558a690ed70963ecd686d78685" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.172 189391 DEBUG oslo_concurrency.lockutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Lock "4bfc824fda96e5558a690ed70963ecd686d78685" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.197 189391 DEBUG oslo_concurrency.processutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.270 189391 DEBUG oslo_concurrency.processutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.272 189391 DEBUG oslo_concurrency.processutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685,backing_fmt=raw /var/lib/nova/instances/c6b20e96-2371-4349-b934-bdb87bec59d0/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.317 189391 DEBUG oslo_concurrency.processutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685,backing_fmt=raw /var/lib/nova/instances/c6b20e96-2371-4349-b934-bdb87bec59d0/disk 1073741824" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
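The instance disk is a copy-on-write overlay: `qemu-img create -f qcow2 -o backing_file=<base>,backing_fmt=raw <dir>/disk 1073741824` makes a 1 GiB qcow2 whose unwritten blocks are read from the shared raw base image under `_base`, which is why creation completes in ~45 ms regardless of image size. A minimal sketch of the same command from Python; paths and size are the ones from the log:

    import subprocess

    base = "/var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685"
    disk = "/var/lib/nova/instances/c6b20e96-2371-4349-b934-bdb87bec59d0/disk"

    subprocess.run(
        ["qemu-img", "create", "-f", "qcow2",
         "-o", f"backing_file={base},backing_fmt=raw",
         disk, "1073741824"],
        check=True,
    )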
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.319 189391 DEBUG oslo_concurrency.lockutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Lock "4bfc824fda96e5558a690ed70963ecd686d78685" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.319 189391 DEBUG oslo_concurrency.processutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.391 189391 DEBUG oslo_concurrency.processutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.393 189391 DEBUG nova.virt.disk.api [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Checking if we can resize image /var/lib/nova/instances/c6b20e96-2371-4349-b934-bdb87bec59d0/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.394 189391 DEBUG oslo_concurrency.processutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c6b20e96-2371-4349-b934-bdb87bec59d0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.467 189391 DEBUG nova.policy [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e8515e48887c45eebf0f44cc18b2f953', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cf54dd78f02c4fc2a3dd9ae4ce3088a7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.479 189391 DEBUG oslo_concurrency.processutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c6b20e96-2371-4349-b934-bdb87bec59d0/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.481 189391 DEBUG nova.virt.disk.api [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Cannot resize image /var/lib/nova/instances/c6b20e96-2371-4349-b934-bdb87bec59d0/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
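`can_resize_image` compares the requested size (the flavor's 1 GiB) with the virtual size `qemu-img info` reports: growing is permitted, shrinking is not, so the resize here is skipped rather than treated as an error. A minimal sketch of that decision, assuming the standard `virtual-size` field (bytes) in qemu-img's JSON output:

    import json
    import subprocess

    def can_resize_image(path: str, new_size: int) -> bool:
        info = json.loads(subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            capture_output=True, text=True, check=True,
        ).stdout)
        return new_size >= info["virtual-size"]  # shrinking is refused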
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.482 189391 DEBUG nova.objects.instance [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Lazy-loading 'migration_context' on Instance uuid c6b20e96-2371-4349-b934-bdb87bec59d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.499 189391 DEBUG nova.virt.libvirt.driver [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.500 189391 DEBUG nova.virt.libvirt.driver [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Ensure instance console log exists: /var/lib/nova/instances/c6b20e96-2371-4349-b934-bdb87bec59d0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.501 189391 DEBUG oslo_concurrency.lockutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.501 189391 DEBUG oslo_concurrency.lockutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.501 189391 DEBUG oslo_concurrency.lockutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.513 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-696e6032-d12c-4533-ae7c-c510dc917f0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.513 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-696e6032-d12c-4533-ae7c-c510dc917f0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.514 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 26 23:41:39 compute-0 nova_compute[189387]: 2025-11-26 23:41:39.514 189391 DEBUG nova.objects.instance [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 696e6032-d12c-4533-ae7c-c510dc917f0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:41:39 compute-0 podman[250522]: 2025-11-26 23:41:39.84230429 +0000 UTC m=+0.114791187 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 23:41:40 compute-0 nova_compute[189387]: 2025-11-26 23:41:40.501 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:40 compute-0 ovn_controller[97697]: 2025-11-26T23:41:40Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:94:50:8a 10.100.0.10
Nov 26 23:41:40 compute-0 ovn_controller[97697]: 2025-11-26T23:41:40Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:94:50:8a 10.100.0.10
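The DHCPOFFER/DHCPACK for fa:16:3e:94:50:8a come from ovn-controller's pinctrl thread: with OVN, DHCP replies are synthesized locally from the DHCP_Options rows attached to the logical switch port, not by a dnsmasq agent. A minimal sketch for listing those rows, assuming ovn-nbctl on a node that can reach the OVN northbound database (typically the control plane, not this compute node):

    import subprocess

    # Lists the DHCP_Options rows OVN serves these offers/acks from.
    print(subprocess.run(
        ["ovn-nbctl", "dhcp-options-list"],
        capture_output=True, text=True, check=True,
    ).stdout)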
Nov 26 23:41:41 compute-0 nova_compute[189387]: 2025-11-26 23:41:41.857 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:41 compute-0 nova_compute[189387]: 2025-11-26 23:41:41.928 189391 DEBUG nova.network.neutron [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Successfully created port: 2df92f44-2be3-4cdf-b73c-654206b2997d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 26 23:41:42 compute-0 nova_compute[189387]: 2025-11-26 23:41:42.259 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Updating instance_info_cache with network_info: [{"id": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "address": "fa:16:3e:94:50:8a", "network": {"id": "23864f37-12d9-4f3e-a0da-ef91c19406ac", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1986799011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cda1d63c3f9d4791a18030ebba1c1b11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2fce3d4-66", "ovs_interfaceid": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:41:42 compute-0 nova_compute[189387]: 2025-11-26 23:41:42.279 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-696e6032-d12c-4533-ae7c-c510dc917f0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:41:42 compute-0 nova_compute[189387]: 2025-11-26 23:41:42.280 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 26 23:41:42 compute-0 nova_compute[189387]: 2025-11-26 23:41:42.280 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:41:42 compute-0 nova_compute[189387]: 2025-11-26 23:41:42.281 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 23:41:42 compute-0 nova_compute[189387]: 2025-11-26 23:41:42.281 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:41:42 compute-0 nova_compute[189387]: 2025-11-26 23:41:42.293 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:42 compute-0 nova_compute[189387]: 2025-11-26 23:41:42.325 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:41:42 compute-0 nova_compute[189387]: 2025-11-26 23:41:42.326 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:41:42 compute-0 nova_compute[189387]: 2025-11-26 23:41:42.326 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:41:42 compute-0 nova_compute[189387]: 2025-11-26 23:41:42.326 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 23:41:42 compute-0 nova_compute[189387]: 2025-11-26 23:41:42.434 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/696e6032-d12c-4533-ae7c-c510dc917f0a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:41:42 compute-0 podman[250550]: 2025-11-26 23:41:42.49822624 +0000 UTC m=+0.097697731 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, config_id=edpm, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.buildah.version=1.33.7)
Nov 26 23:41:42 compute-0 podman[250548]: 2025-11-26 23:41:42.510099907 +0000 UTC m=+0.105814439 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 26 23:41:42 compute-0 nova_compute[189387]: 2025-11-26 23:41:42.513 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/696e6032-d12c-4533-ae7c-c510dc917f0a/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:41:42 compute-0 nova_compute[189387]: 2025-11-26 23:41:42.514 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/696e6032-d12c-4533-ae7c-c510dc917f0a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:41:42 compute-0 podman[250546]: 2025-11-26 23:41:42.516027984 +0000 UTC m=+0.131487051 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, com.redhat.component=ubi9-container, architecture=x86_64, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.4, config_id=edpm, maintainer=Red Hat, Inc., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 26 23:41:42 compute-0 podman[250547]: 2025-11-26 23:41:42.52077035 +0000 UTC m=+0.121952727 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 23:41:42 compute-0 podman[250549]: 2025-11-26 23:41:42.528146207 +0000 UTC m=+0.120878659 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Nov 26 23:41:42 compute-0 nova_compute[189387]: 2025-11-26 23:41:42.580 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/696e6032-d12c-4533-ae7c-c510dc917f0a/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:41:43 compute-0 nova_compute[189387]: 2025-11-26 23:41:43.082 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 23:41:43 compute-0 nova_compute[189387]: 2025-11-26 23:41:43.084 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5182MB free_disk=72.31670761108398GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 23:41:43 compute-0 nova_compute[189387]: 2025-11-26 23:41:43.084 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:41:43 compute-0 nova_compute[189387]: 2025-11-26 23:41:43.085 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:41:43 compute-0 nova_compute[189387]: 2025-11-26 23:41:43.178 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 696e6032-d12c-4533-ae7c-c510dc917f0a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 23:41:43 compute-0 nova_compute[189387]: 2025-11-26 23:41:43.179 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance c6b20e96-2371-4349-b934-bdb87bec59d0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 23:41:43 compute-0 nova_compute[189387]: 2025-11-26 23:41:43.179 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 23:41:43 compute-0 nova_compute[189387]: 2025-11-26 23:41:43.179 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 23:41:43 compute-0 nova_compute[189387]: 2025-11-26 23:41:43.241 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:41:43 compute-0 nova_compute[189387]: 2025-11-26 23:41:43.260 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
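The inventory reported to Placement becomes schedulable capacity as (total − reserved) × allocation_ratio, so this host advertises 8 × 4.0 = 32 VCPU, (7680 − 512) × 1.0 = 7168 MB of RAM, and (79 − 1) × 0.9 ≈ 70 GB of disk. A minimal sketch applying that formula to the inventory logged above:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = int((inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
        print(rc, capacity)  # VCPU 32, MEMORY_MB 7168, DISK_GB 70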
Nov 26 23:41:43 compute-0 nova_compute[189387]: 2025-11-26 23:41:43.288 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 23:41:43 compute-0 nova_compute[189387]: 2025-11-26 23:41:43.289 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.204s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:41:43 compute-0 nova_compute[189387]: 2025-11-26 23:41:43.779 189391 DEBUG nova.network.neutron [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Successfully updated port: 2df92f44-2be3-4cdf-b73c-654206b2997d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 26 23:41:43 compute-0 nova_compute[189387]: 2025-11-26 23:41:43.817 189391 DEBUG oslo_concurrency.lockutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Acquiring lock "refresh_cache-c6b20e96-2371-4349-b934-bdb87bec59d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:41:43 compute-0 nova_compute[189387]: 2025-11-26 23:41:43.818 189391 DEBUG oslo_concurrency.lockutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Acquired lock "refresh_cache-c6b20e96-2371-4349-b934-bdb87bec59d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:41:43 compute-0 nova_compute[189387]: 2025-11-26 23:41:43.819 189391 DEBUG nova.network.neutron [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 26 23:41:44 compute-0 nova_compute[189387]: 2025-11-26 23:41:44.036 189391 DEBUG nova.compute.manager [req-1884320d-72b1-45d2-9e8d-f670e16f0a21 req-04d62dfd-f27a-4d73-9be5-63be75ac59dd f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Received event network-changed-2df92f44-2be3-4cdf-b73c-654206b2997d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:41:44 compute-0 nova_compute[189387]: 2025-11-26 23:41:44.037 189391 DEBUG nova.compute.manager [req-1884320d-72b1-45d2-9e8d-f670e16f0a21 req-04d62dfd-f27a-4d73-9be5-63be75ac59dd f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Refreshing instance network info cache due to event network-changed-2df92f44-2be3-4cdf-b73c-654206b2997d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 23:41:44 compute-0 nova_compute[189387]: 2025-11-26 23:41:44.038 189391 DEBUG oslo_concurrency.lockutils [req-1884320d-72b1-45d2-9e8d-f670e16f0a21 req-04d62dfd-f27a-4d73-9be5-63be75ac59dd f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "refresh_cache-c6b20e96-2371-4349-b934-bdb87bec59d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:41:44 compute-0 nova_compute[189387]: 2025-11-26 23:41:44.171 189391 DEBUG nova.network.neutron [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 26 23:41:44 compute-0 ovn_controller[97697]: 2025-11-26T23:41:44Z|00131|binding|INFO|Releasing lport 779990b0-f58d-4df2-b9a7-48b5134f6ea9 from this chassis (sb_readonly=0)
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.041 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.319 189391 DEBUG nova.network.neutron [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Updating instance_info_cache with network_info: [{"id": "2df92f44-2be3-4cdf-b73c-654206b2997d", "address": "fa:16:3e:98:89:cb", "network": {"id": "9252b9f6-aeac-437f-8208-641d9bceb4ae", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-669404946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf54dd78f02c4fc2a3dd9ae4ce3088a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2df92f44-2b", "ovs_interfaceid": "2df92f44-2be3-4cdf-b73c-654206b2997d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.344 189391 DEBUG oslo_concurrency.lockutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Releasing lock "refresh_cache-c6b20e96-2371-4349-b934-bdb87bec59d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.345 189391 DEBUG nova.compute.manager [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Instance network_info: |[{"id": "2df92f44-2be3-4cdf-b73c-654206b2997d", "address": "fa:16:3e:98:89:cb", "network": {"id": "9252b9f6-aeac-437f-8208-641d9bceb4ae", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-669404946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf54dd78f02c4fc2a3dd9ae4ce3088a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2df92f44-2b", "ovs_interfaceid": "2df92f44-2be3-4cdf-b73c-654206b2997d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
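The network_info payload above is plain JSON once the log framing is stripped; a minimal sketch of pulling out the fields the rest of this spawn sequence keys on (the literal below is a hand-trimmed copy, not the full structure):

    vif = {
        "id": "2df92f44-2be3-4cdf-b73c-654206b2997d",
        "address": "fa:16:3e:98:89:cb",
        "devname": "tap2df92f44-2b",
        "details": {"bridge_name": "br-int"},
        "network": {"subnets": [{"ips": [{"address": "10.100.0.4"}]}],
                    "meta": {"mtu": 1442}},
    }
    print(vif["network"]["subnets"][0]["ips"][0]["address"])  # 10.100.0.4, the fixed IP
    print(vif["network"]["meta"]["mtu"])   # 1442: 1500 minus Geneve tunnel overhead
    print(vif["details"]["bridge_name"], vif["devname"])      # br-int tap2df92f44-2b

The mtu of 1442 together with "tunneled": true marks this as an OVN Geneve-backed tenant network, which matches the ovn_controller binding activity further down.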
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.346 189391 DEBUG oslo_concurrency.lockutils [req-1884320d-72b1-45d2-9e8d-f670e16f0a21 req-04d62dfd-f27a-4d73-9be5-63be75ac59dd f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquired lock "refresh_cache-c6b20e96-2371-4349-b934-bdb87bec59d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.347 189391 DEBUG nova.network.neutron [req-1884320d-72b1-45d2-9e8d-f670e16f0a21 req-04d62dfd-f27a-4d73-9be5-63be75ac59dd f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Refreshing network info cache for port 2df92f44-2be3-4cdf-b73c-654206b2997d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.354 189391 DEBUG nova.virt.libvirt.driver [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Start _get_guest_xml network_info=[{"id": "2df92f44-2be3-4cdf-b73c-654206b2997d", "address": "fa:16:3e:98:89:cb", "network": {"id": "9252b9f6-aeac-437f-8208-641d9bceb4ae", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-669404946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf54dd78f02c4fc2a3dd9ae4ce3088a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2df92f44-2b", "ovs_interfaceid": "2df92f44-2be3-4cdf-b73c-654206b2997d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T23:40:04Z,direct_url=<?>,disk_format='qcow2',id=948c6d5b-0d46-4aec-8649-b6cdcb1a5694,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd2e793599b6418881c391df7f71e0c6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T23:40:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': '948c6d5b-0d46-4aec-8649-b6cdcb1a5694'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.368 189391 WARNING nova.virt.libvirt.driver [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.381 189391 DEBUG nova.virt.libvirt.host [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.382 189391 DEBUG nova.virt.libvirt.host [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.390 189391 DEBUG nova.virt.libvirt.host [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.391 189391 DEBUG nova.virt.libvirt.host [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.391 189391 DEBUG nova.virt.libvirt.driver [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.392 189391 DEBUG nova.virt.hardware [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T23:40:03Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a4234b2d-ed51-4e17-ad57-a8fb6154451b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T23:40:04Z,direct_url=<?>,disk_format='qcow2',id=948c6d5b-0d46-4aec-8649-b6cdcb1a5694,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd2e793599b6418881c391df7f71e0c6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T23:40:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.392 189391 DEBUG nova.virt.hardware [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.393 189391 DEBUG nova.virt.hardware [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.393 189391 DEBUG nova.virt.hardware [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.394 189391 DEBUG nova.virt.hardware [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.394 189391 DEBUG nova.virt.hardware [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.394 189391 DEBUG nova.virt.hardware [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.395 189391 DEBUG nova.virt.hardware [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.395 189391 DEBUG nova.virt.hardware [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.396 189391 DEBUG nova.virt.hardware [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.396 189391 DEBUG nova.virt.hardware [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
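The nova.virt.hardware lines above are nova enumerating sockets*cores*threads factorizations of the flavor's single vCPU under the effectively unlimited 65536 defaults. A simplified sketch of that search (not nova's exact implementation, which also orders the results by preference):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Yield every (sockets, cores, threads) whose product equals the vCPU count.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)] -- the single topology logged

With vcpus=1 only 1:1:1 is possible, hence the <topology sockets="1" cores="1" threads="1"/> element in the guest XML below.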
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.402 189391 DEBUG nova.virt.libvirt.vif [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T23:41:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-364924012',display_name='tempest-ServersTestManualDisk-server-364924012',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-364924012',id=8,image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGSEZemh1zLRHkyjFA6VOJ4hN9BcHH+tcWiplNtszQ3uUhmrW83XS/B/QZVgx7+4tCbXNCMTldKutFWvcgFmrNFVnE8t1/IIUGRsTmnUfwx8Wlm+0uktcD2F2GRP2SUd+w==',key_name='tempest-keypair-1358674110',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cf54dd78f02c4fc2a3dd9ae4ce3088a7',ramdisk_id='',reservation_id='r-4ayzm0u7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-2042327539',owner_user_name='tempest-ServersTestManualDisk-2042327539-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T23:41:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e8515e48887c45eebf0f44cc18b2f953',uuid=c6b20e96-2371-4349-b934-bdb87bec59d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2df92f44-2be3-4cdf-b73c-654206b2997d", "address": "fa:16:3e:98:89:cb", "network": {"id": "9252b9f6-aeac-437f-8208-641d9bceb4ae", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-669404946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf54dd78f02c4fc2a3dd9ae4ce3088a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2df92f44-2b", "ovs_interfaceid": "2df92f44-2be3-4cdf-b73c-654206b2997d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.403 189391 DEBUG nova.network.os_vif_util [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Converting VIF {"id": "2df92f44-2be3-4cdf-b73c-654206b2997d", "address": "fa:16:3e:98:89:cb", "network": {"id": "9252b9f6-aeac-437f-8208-641d9bceb4ae", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-669404946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf54dd78f02c4fc2a3dd9ae4ce3088a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2df92f44-2b", "ovs_interfaceid": "2df92f44-2be3-4cdf-b73c-654206b2997d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.404 189391 DEBUG nova.network.os_vif_util [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:89:cb,bridge_name='br-int',has_traffic_filtering=True,id=2df92f44-2be3-4cdf-b73c-654206b2997d,network=Network(9252b9f6-aeac-437f-8208-641d9bceb4ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2df92f44-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.405 189391 DEBUG nova.objects.instance [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Lazy-loading 'pci_devices' on Instance uuid c6b20e96-2371-4349-b934-bdb87bec59d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.418 189391 DEBUG nova.virt.libvirt.driver [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] End _get_guest_xml xml=<domain type="kvm">
Nov 26 23:41:45 compute-0 nova_compute[189387]:  <uuid>c6b20e96-2371-4349-b934-bdb87bec59d0</uuid>
Nov 26 23:41:45 compute-0 nova_compute[189387]:  <name>instance-00000008</name>
Nov 26 23:41:45 compute-0 nova_compute[189387]:  <memory>131072</memory>
Nov 26 23:41:45 compute-0 nova_compute[189387]:  <vcpu>1</vcpu>
Nov 26 23:41:45 compute-0 nova_compute[189387]:  <metadata>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 23:41:45 compute-0 nova_compute[189387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:      <nova:name>tempest-ServersTestManualDisk-server-364924012</nova:name>
Nov 26 23:41:45 compute-0 nova_compute[189387]:      <nova:creationTime>2025-11-26 23:41:45</nova:creationTime>
Nov 26 23:41:45 compute-0 nova_compute[189387]:      <nova:flavor name="m1.nano">
Nov 26 23:41:45 compute-0 nova_compute[189387]:        <nova:memory>128</nova:memory>
Nov 26 23:41:45 compute-0 nova_compute[189387]:        <nova:disk>1</nova:disk>
Nov 26 23:41:45 compute-0 nova_compute[189387]:        <nova:swap>0</nova:swap>
Nov 26 23:41:45 compute-0 nova_compute[189387]:        <nova:ephemeral>0</nova:ephemeral>
Nov 26 23:41:45 compute-0 nova_compute[189387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 23:41:45 compute-0 nova_compute[189387]:      </nova:flavor>
Nov 26 23:41:45 compute-0 nova_compute[189387]:      <nova:owner>
Nov 26 23:41:45 compute-0 nova_compute[189387]:        <nova:user uuid="e8515e48887c45eebf0f44cc18b2f953">tempest-ServersTestManualDisk-2042327539-project-member</nova:user>
Nov 26 23:41:45 compute-0 nova_compute[189387]:        <nova:project uuid="cf54dd78f02c4fc2a3dd9ae4ce3088a7">tempest-ServersTestManualDisk-2042327539</nova:project>
Nov 26 23:41:45 compute-0 nova_compute[189387]:      </nova:owner>
Nov 26 23:41:45 compute-0 nova_compute[189387]:      <nova:root type="image" uuid="948c6d5b-0d46-4aec-8649-b6cdcb1a5694"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:      <nova:ports>
Nov 26 23:41:45 compute-0 nova_compute[189387]:        <nova:port uuid="2df92f44-2be3-4cdf-b73c-654206b2997d">
Nov 26 23:41:45 compute-0 nova_compute[189387]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:        </nova:port>
Nov 26 23:41:45 compute-0 nova_compute[189387]:      </nova:ports>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    </nova:instance>
Nov 26 23:41:45 compute-0 nova_compute[189387]:  </metadata>
Nov 26 23:41:45 compute-0 nova_compute[189387]:  <sysinfo type="smbios">
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <system>
Nov 26 23:41:45 compute-0 nova_compute[189387]:      <entry name="manufacturer">RDO</entry>
Nov 26 23:41:45 compute-0 nova_compute[189387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 23:41:45 compute-0 nova_compute[189387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 23:41:45 compute-0 nova_compute[189387]:      <entry name="serial">c6b20e96-2371-4349-b934-bdb87bec59d0</entry>
Nov 26 23:41:45 compute-0 nova_compute[189387]:      <entry name="uuid">c6b20e96-2371-4349-b934-bdb87bec59d0</entry>
Nov 26 23:41:45 compute-0 nova_compute[189387]:      <entry name="family">Virtual Machine</entry>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    </system>
Nov 26 23:41:45 compute-0 nova_compute[189387]:  </sysinfo>
Nov 26 23:41:45 compute-0 nova_compute[189387]:  <os>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <boot dev="hd"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <smbios mode="sysinfo"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:  </os>
Nov 26 23:41:45 compute-0 nova_compute[189387]:  <features>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <acpi/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <apic/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <vmcoreinfo/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:  </features>
Nov 26 23:41:45 compute-0 nova_compute[189387]:  <clock offset="utc">
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <timer name="hpet" present="no"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:  </clock>
Nov 26 23:41:45 compute-0 nova_compute[189387]:  <cpu mode="host-model" match="exact">
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:  </cpu>
Nov 26 23:41:45 compute-0 nova_compute[189387]:  <devices>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <disk type="file" device="disk">
Nov 26 23:41:45 compute-0 nova_compute[189387]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/c6b20e96-2371-4349-b934-bdb87bec59d0/disk"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:      <target dev="vda" bus="virtio"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <disk type="file" device="cdrom">
Nov 26 23:41:45 compute-0 nova_compute[189387]:      <driver name="qemu" type="raw" cache="none"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/c6b20e96-2371-4349-b934-bdb87bec59d0/disk.config"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:      <target dev="sda" bus="sata"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <interface type="ethernet">
Nov 26 23:41:45 compute-0 nova_compute[189387]:      <mac address="fa:16:3e:98:89:cb"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:      <mtu size="1442"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:      <target dev="tap2df92f44-2b"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    </interface>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <serial type="pty">
Nov 26 23:41:45 compute-0 nova_compute[189387]:      <log file="/var/lib/nova/instances/c6b20e96-2371-4349-b934-bdb87bec59d0/console.log" append="off"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    </serial>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <video>
Nov 26 23:41:45 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    </video>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <input type="tablet" bus="usb"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <rng model="virtio">
Nov 26 23:41:45 compute-0 nova_compute[189387]:      <backend model="random">/dev/urandom</backend>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    </rng>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <controller type="usb" index="0"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    <memballoon model="virtio">
Nov 26 23:41:45 compute-0 nova_compute[189387]:      <stats period="10"/>
Nov 26 23:41:45 compute-0 nova_compute[189387]:    </memballoon>
Nov 26 23:41:45 compute-0 nova_compute[189387]:  </devices>
Nov 26 23:41:45 compute-0 nova_compute[189387]: </domain>
Nov 26 23:41:45 compute-0 nova_compute[189387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
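The dumped domain XML can be re-fetched later with `virsh dumpxml instance-00000008`. A minimal sketch of sanity-checking such a dump with the standard library (the file name is an assumption; the expected values come from the XML above):

    import xml.etree.ElementTree as ET

    # e.g. saved beforehand with: virsh dumpxml instance-00000008 > instance-00000008.xml
    root = ET.parse("instance-00000008.xml").getroot()
    print(root.findtext("name"), root.findtext("memory"))  # instance-00000008 131072
    print([d.find("target").get("dev") for d in root.iter("disk")])        # ['vda', 'sda']
    print([i.find("mac").get("address") for i in root.iter("interface")])  # ['fa:16:3e:98:89:cb']

Note the <memory> element is in KiB, so 131072 is exactly the m1.nano flavor's 128 MB.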
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.419 189391 DEBUG nova.compute.manager [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Preparing to wait for external event network-vif-plugged-2df92f44-2be3-4cdf-b73c-654206b2997d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.419 189391 DEBUG oslo_concurrency.lockutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Acquiring lock "c6b20e96-2371-4349-b934-bdb87bec59d0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.420 189391 DEBUG oslo_concurrency.lockutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Lock "c6b20e96-2371-4349-b934-bdb87bec59d0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.420 189391 DEBUG oslo_concurrency.lockutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Lock "c6b20e96-2371-4349-b934-bdb87bec59d0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
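The acquire/release pair above is oslo.concurrency serialising event registration through an in-process lock named "<instance uuid>-events". The pattern, as a minimal sketch in decorator form (nova's actual code wraps a local function equivalently):

    from oslo_concurrency import lockutils

    # Everything inside runs under the named per-instance events lock.
    @lockutils.synchronized('c6b20e96-2371-4349-b934-bdb87bec59d0-events')
    def _create_or_get_event():
        return {}  # nova registers the pending network-vif-plugged event here

    _create_or_get_event()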
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.421 189391 DEBUG nova.virt.libvirt.vif [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T23:41:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-364924012',display_name='tempest-ServersTestManualDisk-server-364924012',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-364924012',id=8,image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGSEZemh1zLRHkyjFA6VOJ4hN9BcHH+tcWiplNtszQ3uUhmrW83XS/B/QZVgx7+4tCbXNCMTldKutFWvcgFmrNFVnE8t1/IIUGRsTmnUfwx8Wlm+0uktcD2F2GRP2SUd+w==',key_name='tempest-keypair-1358674110',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cf54dd78f02c4fc2a3dd9ae4ce3088a7',ramdisk_id='',reservation_id='r-4ayzm0u7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-2042327539',owner_user_name='tempest-ServersTestManualDisk-2042327539-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T23:41:39Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e8515e48887c45eebf0f44cc18b2f953',uuid=c6b20e96-2371-4349-b934-bdb87bec59d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2df92f44-2be3-4cdf-b73c-654206b2997d", "address": "fa:16:3e:98:89:cb", "network": {"id": "9252b9f6-aeac-437f-8208-641d9bceb4ae", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-669404946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf54dd78f02c4fc2a3dd9ae4ce3088a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2df92f44-2b", "ovs_interfaceid": "2df92f44-2be3-4cdf-b73c-654206b2997d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.422 189391 DEBUG nova.network.os_vif_util [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Converting VIF {"id": "2df92f44-2be3-4cdf-b73c-654206b2997d", "address": "fa:16:3e:98:89:cb", "network": {"id": "9252b9f6-aeac-437f-8208-641d9bceb4ae", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-669404946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf54dd78f02c4fc2a3dd9ae4ce3088a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2df92f44-2b", "ovs_interfaceid": "2df92f44-2be3-4cdf-b73c-654206b2997d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.422 189391 DEBUG nova.network.os_vif_util [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:89:cb,bridge_name='br-int',has_traffic_filtering=True,id=2df92f44-2be3-4cdf-b73c-654206b2997d,network=Network(9252b9f6-aeac-437f-8208-641d9bceb4ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2df92f44-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.423 189391 DEBUG os_vif [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:89:cb,bridge_name='br-int',has_traffic_filtering=True,id=2df92f44-2be3-4cdf-b73c-654206b2997d,network=Network(9252b9f6-aeac-437f-8208-641d9bceb4ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2df92f44-2b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.424 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.425 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.425 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.430 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.431 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2df92f44-2b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.431 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2df92f44-2b, col_values=(('external_ids', {'iface-id': '2df92f44-2be3-4cdf-b73c-654206b2997d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:98:89:cb', 'vm-uuid': 'c6b20e96-2371-4349-b934-bdb87bec59d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.434 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:45 compute-0 NetworkManager[56227]: <info>  [1764200505.4355] manager: (tap2df92f44-2b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.436 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.454 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.456 189391 INFO os_vif [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:89:cb,bridge_name='br-int',has_traffic_filtering=True,id=2df92f44-2be3-4cdf-b73c-654206b2997d,network=Network(9252b9f6-aeac-437f-8208-641d9bceb4ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2df92f44-2b')#033[00m
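The AddBridgeCommand/AddPortCommand/DbSetCommand entries above are ovsdbapp transactions issued through os-vif. A minimal sketch of the same two-command port-plug transaction against a local switch (the OVSDB socket path is an assumption):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # One transaction, two commands, mirroring the txn logged above: create the
    # port on br-int, then tag its Interface row so ovn-controller can claim it.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap2df92f44-2b', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap2df92f44-2b',
            ('external_ids', {'iface-id': '2df92f44-2be3-4cdf-b73c-654206b2997d',
                              'iface-status': 'active',
                              'attached-mac': 'fa:16:3e:98:89:cb'})))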
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.504 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.518 189391 DEBUG nova.virt.libvirt.driver [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.518 189391 DEBUG nova.virt.libvirt.driver [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.519 189391 DEBUG nova.virt.libvirt.driver [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] No VIF found with MAC fa:16:3e:98:89:cb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.520 189391 INFO nova.virt.libvirt.driver [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Using config drive#033[00m
Nov 26 23:41:45 compute-0 nova_compute[189387]: 2025-11-26 23:41:45.664 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:46 compute-0 nova_compute[189387]: 2025-11-26 23:41:46.165 189391 INFO nova.virt.libvirt.driver [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Creating config drive at /var/lib/nova/instances/c6b20e96-2371-4349-b934-bdb87bec59d0/disk.config#033[00m
Nov 26 23:41:46 compute-0 nova_compute[189387]: 2025-11-26 23:41:46.179 189391 DEBUG oslo_concurrency.processutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c6b20e96-2371-4349-b934-bdb87bec59d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn51w3xld execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:41:46 compute-0 nova_compute[189387]: 2025-11-26 23:41:46.328 189391 DEBUG oslo_concurrency.processutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c6b20e96-2371-4349-b934-bdb87bec59d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn51w3xld" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
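The mkisofs invocation above (exit 0 in 0.149s) is the entire config-drive build: an ISO9660 volume labelled config-2, the label cloud-init and cirros probe for. A minimal sketch of the same call; the staged tree below is a placeholder, not what nova actually wrote into the temporary directory:

    import pathlib
    import subprocess
    import tempfile

    staging = tempfile.mkdtemp()
    latest = pathlib.Path(staging, 'openstack', 'latest')
    latest.mkdir(parents=True)
    (latest / 'meta_data.json').write_text('{"uuid": "c6b20e96-2371-4349-b934-bdb87bec59d0"}')

    # Flags as logged above, minus the multi-word -publisher string.
    subprocess.run(['/usr/bin/mkisofs', '-o', 'disk.config',
                    '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
                    '-quiet', '-J', '-r', '-V', 'config-2', staging], check=True)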
Nov 26 23:41:46 compute-0 kernel: tap2df92f44-2b: entered promiscuous mode
Nov 26 23:41:46 compute-0 ovn_controller[97697]: 2025-11-26T23:41:46Z|00132|binding|INFO|Claiming lport 2df92f44-2be3-4cdf-b73c-654206b2997d for this chassis.
Nov 26 23:41:46 compute-0 ovn_controller[97697]: 2025-11-26T23:41:46Z|00133|binding|INFO|2df92f44-2be3-4cdf-b73c-654206b2997d: Claiming fa:16:3e:98:89:cb 10.100.0.4
Nov 26 23:41:46 compute-0 NetworkManager[56227]: <info>  [1764200506.4240] manager: (tap2df92f44-2b): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Nov 26 23:41:46 compute-0 nova_compute[189387]: 2025-11-26 23:41:46.425 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:46.434 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:89:cb 10.100.0.4'], port_security=['fa:16:3e:98:89:cb 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c6b20e96-2371-4349-b934-bdb87bec59d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9252b9f6-aeac-437f-8208-641d9bceb4ae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cf54dd78f02c4fc2a3dd9ae4ce3088a7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd9ecca15-fc29-47d4-9f89-c8fe348a00f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cbc4b21a-25c8-4ee9-a886-d2e6c775a37e, chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=2df92f44-2be3-4cdf-b73c-654206b2997d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:46.436 106595 INFO neutron.agent.ovn.metadata.agent [-] Port 2df92f44-2be3-4cdf-b73c-654206b2997d in datapath 9252b9f6-aeac-437f-8208-641d9bceb4ae bound to our chassis#033[00m
Nov 26 23:41:46 compute-0 ovn_controller[97697]: 2025-11-26T23:41:46Z|00134|binding|INFO|Setting lport 2df92f44-2be3-4cdf-b73c-654206b2997d ovn-installed in OVS
Nov 26 23:41:46 compute-0 ovn_controller[97697]: 2025-11-26T23:41:46Z|00135|binding|INFO|Setting lport 2df92f44-2be3-4cdf-b73c-654206b2997d up in Southbound
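The claim/up sequence above works because ovn-controller matched the Interface row's external_ids:iface-id, written during the port plug, against the logical port name in the Southbound database. A quick way to inspect that mapping on the host, assuming ovs-vsctl is available:

    import subprocess

    # Read back the external_ids nova/os-vif stamped on the tap interface.
    out = subprocess.run(
        ['ovs-vsctl', 'get', 'Interface', 'tap2df92f44-2b', 'external_ids'],
        capture_output=True, text=True, check=True).stdout
    print(out)
    # e.g. {attached-mac="fa:16:3e:98:89:cb", iface-id="2df92f44-2be3-4cdf-b73c-654206b2997d", ...}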
Nov 26 23:41:46 compute-0 nova_compute[189387]: 2025-11-26 23:41:46.438 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:46 compute-0 nova_compute[189387]: 2025-11-26 23:41:46.441 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:46.441 106595 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9252b9f6-aeac-437f-8208-641d9bceb4ae#033[00m
Nov 26 23:41:46 compute-0 nova_compute[189387]: 2025-11-26 23:41:46.443 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:46.456 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[41d4f380-fc72-4ef8-8c27-d3f0f1fc3232]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:46.457 106595 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9252b9f6-a1 in ovnmeta-9252b9f6-aeac-437f-8208-641d9bceb4ae namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:46.461 239757 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9252b9f6-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:46.461 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[af767cca-04b9-4936-a06e-982e33cb77f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:46.464 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[27ae23c7-6599-493a-acce-09b3f58df56d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
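The privsep calls above are the metadata agent building its per-network plumbing: a VETH pair with one end (tap9252b9f6-a0) left in the root namespace and the peer (tap9252b9f6-a1) moved into ovnmeta-9252b9f6-aeac-437f-8208-641d9bceb4ae. A rough shell-equivalent sketch; the agent itself does this through pyroute2 under privsep rather than by shelling out:

    import subprocess

    ns = 'ovnmeta-9252b9f6-aeac-437f-8208-641d9bceb4ae'
    subprocess.run(['ip', 'netns', 'add', ns], check=True)
    # a0 stays in the root namespace; the a1 peer is created directly inside the
    # ovnmeta namespace, matching the device events logged around this point.
    subprocess.run(['ip', 'link', 'add', 'tap9252b9f6-a0', 'type', 'veth',
                    'peer', 'name', 'tap9252b9f6-a1', 'netns', ns], check=True)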
Nov 26 23:41:46 compute-0 systemd-udevd[250662]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 23:41:46 compute-0 NetworkManager[56227]: <info>  [1764200506.4861] device (tap2df92f44-2b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:46.484 106708 DEBUG oslo.privsep.daemon [-] privsep: reply[0545fe69-018f-4d6e-8816-4bf1006ae5e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:46 compute-0 NetworkManager[56227]: <info>  [1764200506.4902] device (tap2df92f44-2b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 23:41:46 compute-0 systemd-machined[155674]: New machine qemu-8-instance-00000008.
Nov 26 23:41:46 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:46.511 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[f845b577-3e24-45d1-bb73-f24f509f57f7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:46.546 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[736c64df-7e81-4b94-9601-aa58f3c81e78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:46 compute-0 systemd-udevd[250666]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 23:41:46 compute-0 NetworkManager[56227]: <info>  [1764200506.5583] manager: (tap9252b9f6-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:46.557 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[c8d2de71-434b-4c65-99bb-d4991ff4d21f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:46.591 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[514ee3fe-f8ab-4887-bb21-415969147d62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:46.600 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[d599bc0f-414a-4f2e-af6a-03709d221257]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:46 compute-0 NetworkManager[56227]: <info>  [1764200506.6319] device (tap9252b9f6-a0): carrier: link connected
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:46.636 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[85ea6ecb-ec07-4ea3-a5da-2a3fb17c8b7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:46.662 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[c996c0c4-0988-48f6-a538-1dabd6476de0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9252b9f6-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:52:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518928, 'reachable_time': 38113, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250695, 'error': None, 'target': 'ovnmeta-9252b9f6-aeac-437f-8208-641d9bceb4ae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:46.690 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[7761c238-6403-4909-8e97-350a8be4de74]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe82:521c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 518928, 'tstamp': 518928}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250696, 'error': None, 'target': 'ovnmeta-9252b9f6-aeac-437f-8208-641d9bceb4ae', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:46.713 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[9005c56c-27e0-4ddf-8378-918f1040a0a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9252b9f6-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:82:52:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 26], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518928, 'reachable_time': 38113, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250697, 'error': None, 'target': 'ovnmeta-9252b9f6-aeac-437f-8208-641d9bceb4ae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
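[editor's note] The two large replies above are pyroute2 netlink messages (RTM_NEWLINK, RTM_NEWADDR) marshalled back through the privsep channel; each interface attribute is an ['IFLA_*', value] pair under 'attrs'. A small illustrative helper for reading them (assumption: msg is one element of such a reply):

    # Illustrative helper (not neutron code): pull a named IFLA_* value out
    # of a pyroute2-style link message such as the replies logged above.
    def ifla(msg, name, default=None):
        for attr, value in msg.get('attrs', []):
            if attr == name:
                return value
        return default

    # With msg bound to the first element of the RTM_NEWLINK reply:
    #   ifla(msg, 'IFLA_IFNAME')  -> 'tap9252b9f6-a1'
    #   ifla(msg, 'IFLA_ADDRESS') -> 'fa:16:3e:82:52:1c'
    #   ifla(msg, 'IFLA_MTU')     -> 1500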
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:46.759 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[07808f50-6b78-49e1-9448-363e6813bfca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:46.840 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[604fd45d-8f7a-4d2e-93d8-839bc63b26a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:46 compute-0 nova_compute[189387]: 2025-11-26 23:41:46.841 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200506.8414743, c6b20e96-2371-4349-b934-bdb87bec59d0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:41:46 compute-0 nova_compute[189387]: 2025-11-26 23:41:46.842 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] VM Started (Lifecycle Event)#033[00m
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:46.842 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9252b9f6-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:46.843 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:46.843 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9252b9f6-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:41:46 compute-0 NetworkManager[56227]: <info>  [1764200506.8456] manager: (tap9252b9f6-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Nov 26 23:41:46 compute-0 nova_compute[189387]: 2025-11-26 23:41:46.844 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:46 compute-0 kernel: tap9252b9f6-a0: entered promiscuous mode
Nov 26 23:41:46 compute-0 nova_compute[189387]: 2025-11-26 23:41:46.849 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:46.851 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9252b9f6-a0, col_values=(('external_ids', {'iface-id': 'a400a621-2cb6-47dd-b3b6-9db43e78906a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
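[editor's note] The three ovsdbapp commands logged above (DelPortCommand, AddPortCommand, DbSetCommand) move the root-namespace veth end onto br-int and set external_ids:iface-id so ovn-controller can bind the lport. A sketch of the same calls through ovsdbapp's public API; the OVS socket path and timeout are assumptions, not from the log:

    # Sketch of the same three commands through ovsdbapp's public API.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # DelPortCommand: drop a stale port from br-ex if one is there
        txn.add(api.del_port('tap9252b9f6-a0', bridge='br-ex', if_exists=True))
    with api.transaction(check_error=True) as txn:
        # AddPortCommand: plug the root-namespace veth end into br-int
        txn.add(api.add_port('br-int', 'tap9252b9f6-a0', may_exist=True))
        # DbSetCommand: tag the interface so ovn-controller binds the lport
        txn.add(api.db_set(
            'Interface', 'tap9252b9f6-a0',
            ('external_ids',
             {'iface-id': 'a400a621-2cb6-47dd-b3b6-9db43e78906a'})))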
Nov 26 23:41:46 compute-0 nova_compute[189387]: 2025-11-26 23:41:46.852 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:46 compute-0 ovn_controller[97697]: 2025-11-26T23:41:46Z|00136|binding|INFO|Releasing lport a400a621-2cb6-47dd-b3b6-9db43e78906a from this chassis (sb_readonly=0)
Nov 26 23:41:46 compute-0 nova_compute[189387]: 2025-11-26 23:41:46.865 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:41:46 compute-0 nova_compute[189387]: 2025-11-26 23:41:46.872 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200506.841552, c6b20e96-2371-4349-b934-bdb87bec59d0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:41:46 compute-0 nova_compute[189387]: 2025-11-26 23:41:46.872 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] VM Paused (Lifecycle Event)#033[00m
Nov 26 23:41:46 compute-0 nova_compute[189387]: 2025-11-26 23:41:46.880 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:46.882 106595 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9252b9f6-aeac-437f-8208-641d9bceb4ae.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9252b9f6-aeac-437f-8208-641d9bceb4ae.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:46.883 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[1a9372b9-6e99-447f-a8e9-44ae51aa4fed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
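[editor's note] The Errno 2 above is the normal first-run path: the agent probes the haproxy pidfile to see whether a proxy already serves this datapath, and ENOENT simply means it has to render a config and spawn one, which is what follows. Illustrative equivalent of that check (not neutron's exact code):

    # ENOENT on the pidfile means no proxy is running for this datapath yet.
    def get_haproxy_pid(path):
        try:
            with open(path) as f:
                return int(f.read().strip() or 0)
        except FileNotFoundError:
            return None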
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:46.884 106595 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: global
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]:    log         /dev/log local0 debug
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]:    log-tag     haproxy-metadata-proxy-9252b9f6-aeac-437f-8208-641d9bceb4ae
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]:    user        root
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]:    group       root
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]:    maxconn     1024
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]:    pidfile     /var/lib/neutron/external/pids/9252b9f6-aeac-437f-8208-641d9bceb4ae.pid.haproxy
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]:    daemon
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: defaults
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]:    log global
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]:    mode http
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]:    option httplog
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]:    option dontlognull
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]:    option http-server-close
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]:    option forwardfor
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]:    retries                 3
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]:    timeout http-request    30s
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]:    timeout connect         30s
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]:    timeout client          32s
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]:    timeout server          32s
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]:    timeout http-keep-alive 30s
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: listen listener
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]:    bind 169.254.169.254:80
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]:    server metadata /var/lib/neutron/metadata_proxy
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]:    http-request add-header X-OVN-Network-ID 9252b9f6-aeac-437f-8208-641d9bceb4ae
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 26 23:41:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:46.884 106595 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9252b9f6-aeac-437f-8208-641d9bceb4ae', 'env', 'PROCESS_TAG=haproxy-9252b9f6-aeac-437f-8208-641d9bceb4ae', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9252b9f6-aeac-437f-8208-641d9bceb4ae.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
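[editor's note] The rendered config above binds 169.254.169.254:80 inside the ovnmeta namespace, and "server metadata /var/lib/neutron/metadata_proxy" is a haproxy UNIX-socket backend (haproxy treats addresses beginning with '/' as UNIX paths) pointing at the socket the metadata agent listens on. A stdlib diagnostic sketch for poking that socket from the host; note the real agent also expects the headers haproxy adds (such as X-Forwarded-For), so an unadorned request like this may be rejected:

    # Diagnostic sketch (assumption: run as root on the compute host; this
    # is not agent code): talk straight to the UNIX socket behind haproxy.
    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/var/lib/neutron/metadata_proxy')
    conn.request('GET', '/openstack/latest/meta_data.json',
                 headers={'X-OVN-Network-ID':
                          '9252b9f6-aeac-437f-8208-641d9bceb4ae'})
    resp = conn.getresponse()
    print(resp.status, resp.read()[:120])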
Nov 26 23:41:46 compute-0 nova_compute[189387]: 2025-11-26 23:41:46.890 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:41:46 compute-0 nova_compute[189387]: 2025-11-26 23:41:46.897 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 23:41:46 compute-0 nova_compute[189387]: 2025-11-26 23:41:46.914 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
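[editor's note] In the sync above, "DB power_state: 0, VM power_state: 3" compares nova's power-state codes: 0 is NOSTATE (the DB record is not populated yet) and 3 is PAUSED (libvirt keeps the guest paused until VIF plugging completes, matching the "VM Paused" lifecycle event). For reference, the constants from nova/compute/power_state.py:

    NOSTATE   = 0x00
    RUNNING   = 0x01
    PAUSED    = 0x03
    SHUTDOWN  = 0x04
    CRASHED   = 0x06
    SUSPENDED = 0x07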
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.133 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.134 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.134 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.380 189391 DEBUG nova.compute.manager [req-2f1d8c90-8c41-45cd-aaa0-1350ecd90367 req-12e34c59-8706-4918-b401-4781cf7ac953 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Received event network-vif-plugged-2df92f44-2be3-4cdf-b73c-654206b2997d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.382 189391 DEBUG oslo_concurrency.lockutils [req-2f1d8c90-8c41-45cd-aaa0-1350ecd90367 req-12e34c59-8706-4918-b401-4781cf7ac953 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "c6b20e96-2371-4349-b934-bdb87bec59d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.383 189391 DEBUG oslo_concurrency.lockutils [req-2f1d8c90-8c41-45cd-aaa0-1350ecd90367 req-12e34c59-8706-4918-b401-4781cf7ac953 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "c6b20e96-2371-4349-b934-bdb87bec59d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.384 189391 DEBUG oslo_concurrency.lockutils [req-2f1d8c90-8c41-45cd-aaa0-1350ecd90367 req-12e34c59-8706-4918-b401-4781cf7ac953 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "c6b20e96-2371-4349-b934-bdb87bec59d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.385 189391 DEBUG nova.compute.manager [req-2f1d8c90-8c41-45cd-aaa0-1350ecd90367 req-12e34c59-8706-4918-b401-4781cf7ac953 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Processing event network-vif-plugged-2df92f44-2be3-4cdf-b73c-654206b2997d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.388 189391 DEBUG nova.compute.manager [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.393 189391 DEBUG nova.virt.libvirt.driver [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.395 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200507.395668, c6b20e96-2371-4349-b934-bdb87bec59d0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.396 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] VM Resumed (Lifecycle Event)#033[00m
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.403 189391 INFO nova.virt.libvirt.driver [-] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Instance spawned successfully.#033[00m
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.404 189391 DEBUG nova.virt.libvirt.driver [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 26 23:41:47 compute-0 podman[250736]: 2025-11-26 23:41:47.417968969 +0000 UTC m=+0.080834673 container create e5aed9d04b3e4f05f2308af18707355efd3111acb281923dd5f72c4d803dc578 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9252b9f6-aeac-437f-8208-641d9bceb4ae, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.432 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.442 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.447 189391 DEBUG nova.network.neutron [req-1884320d-72b1-45d2-9e8d-f670e16f0a21 req-04d62dfd-f27a-4d73-9be5-63be75ac59dd f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Updated VIF entry in instance network info cache for port 2df92f44-2be3-4cdf-b73c-654206b2997d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.449 189391 DEBUG nova.network.neutron [req-1884320d-72b1-45d2-9e8d-f670e16f0a21 req-04d62dfd-f27a-4d73-9be5-63be75ac59dd f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Updating instance_info_cache with network_info: [{"id": "2df92f44-2be3-4cdf-b73c-654206b2997d", "address": "fa:16:3e:98:89:cb", "network": {"id": "9252b9f6-aeac-437f-8208-641d9bceb4ae", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-669404946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf54dd78f02c4fc2a3dd9ae4ce3088a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2df92f44-2b", "ovs_interfaceid": "2df92f44-2be3-4cdf-b73c-654206b2997d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
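[editor's note] Two details worth reading out of the cached network_info above: "active": false persists until OVN reports the port bound and up (hence the later network-vif-plugged events), and "mtu": 1442 on a "tunneled": true network is the Geneve default, i.e. the 1500-byte physical MTU minus 58 bytes of encapsulation overhead under neutron's default accounting (20 IPv4 + 8 UDP + 30 reserved for the Geneve header and options). As a quick check:

    # Neutron's default Geneve overhead accounting
    # (assumption: IPv4 underlay, 1500-byte physical MTU).
    ipv4_hdr, udp_hdr, geneve_max_hdr = 20, 8, 30
    assert 1500 - (ipv4_hdr + udp_hdr + geneve_max_hdr) == 1442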
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.452 189391 DEBUG nova.virt.libvirt.driver [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.452 189391 DEBUG nova.virt.libvirt.driver [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.453 189391 DEBUG nova.virt.libvirt.driver [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.454 189391 DEBUG nova.virt.libvirt.driver [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.454 189391 DEBUG nova.virt.libvirt.driver [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.455 189391 DEBUG nova.virt.libvirt.driver [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:41:47 compute-0 systemd[1]: Started libpod-conmon-e5aed9d04b3e4f05f2308af18707355efd3111acb281923dd5f72c4d803dc578.scope.
Nov 26 23:41:47 compute-0 podman[250736]: 2025-11-26 23:41:47.377844931 +0000 UTC m=+0.040710725 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.487 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.488 189391 DEBUG oslo_concurrency.lockutils [req-1884320d-72b1-45d2-9e8d-f670e16f0a21 req-04d62dfd-f27a-4d73-9be5-63be75ac59dd f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Releasing lock "refresh_cache-c6b20e96-2371-4349-b934-bdb87bec59d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:41:47 compute-0 systemd[1]: Started libcrun container.
Nov 26 23:41:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/018a2b0128cf339a0555e843b98869b2e420f3864d6f6ec0b5101e0c6b549f7b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.513 189391 INFO nova.compute.manager [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Took 8.45 seconds to spawn the instance on the hypervisor.#033[00m
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.514 189391 DEBUG nova.compute.manager [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:41:47 compute-0 podman[250736]: 2025-11-26 23:41:47.537556974 +0000 UTC m=+0.200422728 container init e5aed9d04b3e4f05f2308af18707355efd3111acb281923dd5f72c4d803dc578 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9252b9f6-aeac-437f-8208-641d9bceb4ae, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 26 23:41:47 compute-0 podman[250736]: 2025-11-26 23:41:47.547326103 +0000 UTC m=+0.210191837 container start e5aed9d04b3e4f05f2308af18707355efd3111acb281923dd5f72c4d803dc578 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9252b9f6-aeac-437f-8208-641d9bceb4ae, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Nov 26 23:41:47 compute-0 neutron-haproxy-ovnmeta-9252b9f6-aeac-437f-8208-641d9bceb4ae[250752]: [NOTICE]   (250756) : New worker (250758) forked
Nov 26 23:41:47 compute-0 neutron-haproxy-ovnmeta-9252b9f6-aeac-437f-8208-641d9bceb4ae[250752]: [NOTICE]   (250756) : Loading success.
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.577 189391 INFO nova.compute.manager [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Took 8.97 seconds to build instance.#033[00m
Nov 26 23:41:47 compute-0 nova_compute[189387]: 2025-11-26 23:41:47.598 189391 DEBUG oslo_concurrency.lockutils [None req-c29bbf84-1049-4874-bc18-6b9803aecc65 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Lock "c6b20e96-2371-4349-b934-bdb87bec59d0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:41:48 compute-0 nova_compute[189387]: 2025-11-26 23:41:48.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:41:49 compute-0 nova_compute[189387]: 2025-11-26 23:41:49.003 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:49 compute-0 nova_compute[189387]: 2025-11-26 23:41:49.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:41:49 compute-0 nova_compute[189387]: 2025-11-26 23:41:49.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:41:49 compute-0 nova_compute[189387]: 2025-11-26 23:41:49.517 189391 DEBUG nova.compute.manager [req-16369e0f-f3e6-4cc7-97a1-7fa18c6a453f req-d07bccbf-f41d-4b4a-8850-179faaf2346b f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Received event network-vif-plugged-2df92f44-2be3-4cdf-b73c-654206b2997d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:41:49 compute-0 nova_compute[189387]: 2025-11-26 23:41:49.517 189391 DEBUG oslo_concurrency.lockutils [req-16369e0f-f3e6-4cc7-97a1-7fa18c6a453f req-d07bccbf-f41d-4b4a-8850-179faaf2346b f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "c6b20e96-2371-4349-b934-bdb87bec59d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:41:49 compute-0 nova_compute[189387]: 2025-11-26 23:41:49.517 189391 DEBUG oslo_concurrency.lockutils [req-16369e0f-f3e6-4cc7-97a1-7fa18c6a453f req-d07bccbf-f41d-4b4a-8850-179faaf2346b f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "c6b20e96-2371-4349-b934-bdb87bec59d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:41:49 compute-0 nova_compute[189387]: 2025-11-26 23:41:49.518 189391 DEBUG oslo_concurrency.lockutils [req-16369e0f-f3e6-4cc7-97a1-7fa18c6a453f req-d07bccbf-f41d-4b4a-8850-179faaf2346b f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "c6b20e96-2371-4349-b934-bdb87bec59d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:41:49 compute-0 nova_compute[189387]: 2025-11-26 23:41:49.518 189391 DEBUG nova.compute.manager [req-16369e0f-f3e6-4cc7-97a1-7fa18c6a453f req-d07bccbf-f41d-4b4a-8850-179faaf2346b f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] No waiting events found dispatching network-vif-plugged-2df92f44-2be3-4cdf-b73c-654206b2997d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:41:49 compute-0 nova_compute[189387]: 2025-11-26 23:41:49.518 189391 WARNING nova.compute.manager [req-16369e0f-f3e6-4cc7-97a1-7fa18c6a453f req-d07bccbf-f41d-4b4a-8850-179faaf2346b f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Received unexpected event network-vif-plugged-2df92f44-2be3-4cdf-b73c-654206b2997d for instance with vm_state active and task_state None.#033[00m
Nov 26 23:41:50 compute-0 nova_compute[189387]: 2025-11-26 23:41:50.436 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:50 compute-0 nova_compute[189387]: 2025-11-26 23:41:50.508 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.369 189391 DEBUG oslo_concurrency.lockutils [None req-650ae985-ec53-4213-9656-f72832ae1656 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Acquiring lock "c6b20e96-2371-4349-b934-bdb87bec59d0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.371 189391 DEBUG oslo_concurrency.lockutils [None req-650ae985-ec53-4213-9656-f72832ae1656 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Lock "c6b20e96-2371-4349-b934-bdb87bec59d0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.372 189391 DEBUG oslo_concurrency.lockutils [None req-650ae985-ec53-4213-9656-f72832ae1656 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Acquiring lock "c6b20e96-2371-4349-b934-bdb87bec59d0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.373 189391 DEBUG oslo_concurrency.lockutils [None req-650ae985-ec53-4213-9656-f72832ae1656 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Lock "c6b20e96-2371-4349-b934-bdb87bec59d0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.374 189391 DEBUG oslo_concurrency.lockutils [None req-650ae985-ec53-4213-9656-f72832ae1656 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Lock "c6b20e96-2371-4349-b934-bdb87bec59d0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.376 189391 INFO nova.compute.manager [None req-650ae985-ec53-4213-9656-f72832ae1656 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Terminating instance#033[00m
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.379 189391 DEBUG nova.compute.manager [None req-650ae985-ec53-4213-9656-f72832ae1656 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 26 23:41:51 compute-0 kernel: tap2df92f44-2b (unregistering): left promiscuous mode
Nov 26 23:41:51 compute-0 NetworkManager[56227]: <info>  [1764200511.4093] device (tap2df92f44-2b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.423 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:51 compute-0 ovn_controller[97697]: 2025-11-26T23:41:51Z|00137|binding|INFO|Releasing lport 2df92f44-2be3-4cdf-b73c-654206b2997d from this chassis (sb_readonly=0)
Nov 26 23:41:51 compute-0 ovn_controller[97697]: 2025-11-26T23:41:51Z|00138|binding|INFO|Setting lport 2df92f44-2be3-4cdf-b73c-654206b2997d down in Southbound
Nov 26 23:41:51 compute-0 ovn_controller[97697]: 2025-11-26T23:41:51Z|00139|binding|INFO|Removing iface tap2df92f44-2b ovn-installed in OVS
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.427 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:51 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:51.436 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:89:cb 10.100.0.4'], port_security=['fa:16:3e:98:89:cb 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'c6b20e96-2371-4349-b934-bdb87bec59d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9252b9f6-aeac-437f-8208-641d9bceb4ae', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cf54dd78f02c4fc2a3dd9ae4ce3088a7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd9ecca15-fc29-47d4-9f89-c8fe348a00f1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.232'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cbc4b21a-25c8-4ee9-a886-d2e6c775a37e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=2df92f44-2be3-4cdf-b73c-654206b2997d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
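[editor's note] The "Matched UPDATE" line is ovsdbapp's event framework firing: the agent registered a row event on the southbound Port_Binding table, and the old= fragment carries only the columns that changed (up, chassis). The shape of such a handler, as a hedged sketch; the real PortBindingUpdatedEvent lives in neutron's OVN metadata agent:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdated(row_event.RowEvent):
        def __init__(self):
            # (events, table, conditions) mirror the log line:
            # ('update',), 'Port_Binding', no extra conditions.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.event_name = 'PortBindingUpdated'

        def run(self, event, row, old):
            # 'old' carries only the changed columns (here: up, chassis),
            # so a cleared chassis means the port left this host.
            if getattr(old, 'chassis', None) and not row.chassis:
                print('lport %s unbound from this chassis' % row.logical_port)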
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.438 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:51 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:51.442 106595 INFO neutron.agent.ovn.metadata.agent [-] Port 2df92f44-2be3-4cdf-b73c-654206b2997d in datapath 9252b9f6-aeac-437f-8208-641d9bceb4ae unbound from our chassis#033[00m
Nov 26 23:41:51 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:51.446 106595 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9252b9f6-aeac-437f-8208-641d9bceb4ae, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 26 23:41:51 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:51.448 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[5316a201-0b1e-4119-b249-7675b1e95d89]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:51 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:51.450 106595 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9252b9f6-aeac-437f-8208-641d9bceb4ae namespace which is not needed anymore#033[00m
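[editor's note] Teardown counterpart of the provisioning sketch earlier: with the last VIF gone from the datapath, the agent deletes the namespace, and both veth ends disappear with it. Illustrative one-liner, namespace name copied from the log:

    from pyroute2 import netns

    netns.remove('ovnmeta-9252b9f6-aeac-437f-8208-641d9bceb4ae')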
Nov 26 23:41:51 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Nov 26 23:41:51 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 4.720s CPU time.
Nov 26 23:41:51 compute-0 systemd-machined[155674]: Machine qemu-8-instance-00000008 terminated.
Nov 26 23:41:51 compute-0 podman[250770]: 2025-11-26 23:41:51.550873798 +0000 UTC m=+0.114881870 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.612 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.618 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.645 189391 DEBUG nova.compute.manager [req-5a25f036-1f5c-42f8-922a-24c6e0b7d068 req-c10e0f62-ce78-49e6-b48a-9f1d9fed4c1f f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Received event network-changed-2df92f44-2be3-4cdf-b73c-654206b2997d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.645 189391 DEBUG nova.compute.manager [req-5a25f036-1f5c-42f8-922a-24c6e0b7d068 req-c10e0f62-ce78-49e6-b48a-9f1d9fed4c1f f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Refreshing instance network info cache due to event network-changed-2df92f44-2be3-4cdf-b73c-654206b2997d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.645 189391 DEBUG oslo_concurrency.lockutils [req-5a25f036-1f5c-42f8-922a-24c6e0b7d068 req-c10e0f62-ce78-49e6-b48a-9f1d9fed4c1f f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "refresh_cache-c6b20e96-2371-4349-b934-bdb87bec59d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.646 189391 DEBUG oslo_concurrency.lockutils [req-5a25f036-1f5c-42f8-922a-24c6e0b7d068 req-c10e0f62-ce78-49e6-b48a-9f1d9fed4c1f f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquired lock "refresh_cache-c6b20e96-2371-4349-b934-bdb87bec59d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.646 189391 DEBUG nova.network.neutron [req-5a25f036-1f5c-42f8-922a-24c6e0b7d068 req-c10e0f62-ce78-49e6-b48a-9f1d9fed4c1f f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Refreshing network info cache for port 2df92f44-2be3-4cdf-b73c-654206b2997d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 23:41:51 compute-0 neutron-haproxy-ovnmeta-9252b9f6-aeac-437f-8208-641d9bceb4ae[250752]: [NOTICE]   (250756) : haproxy version is 2.8.14-c23fe91
Nov 26 23:41:51 compute-0 neutron-haproxy-ovnmeta-9252b9f6-aeac-437f-8208-641d9bceb4ae[250752]: [NOTICE]   (250756) : path to executable is /usr/sbin/haproxy
Nov 26 23:41:51 compute-0 neutron-haproxy-ovnmeta-9252b9f6-aeac-437f-8208-641d9bceb4ae[250752]: [WARNING]  (250756) : Exiting Master process...
Nov 26 23:41:51 compute-0 neutron-haproxy-ovnmeta-9252b9f6-aeac-437f-8208-641d9bceb4ae[250752]: [ALERT]    (250756) : Current worker (250758) exited with code 143 (Terminated)
Nov 26 23:41:51 compute-0 neutron-haproxy-ovnmeta-9252b9f6-aeac-437f-8208-641d9bceb4ae[250752]: [WARNING]  (250756) : All workers exited. Exiting... (0)
Nov 26 23:41:51 compute-0 systemd[1]: libpod-e5aed9d04b3e4f05f2308af18707355efd3111acb281923dd5f72c4d803dc578.scope: Deactivated successfully.
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.662 189391 INFO nova.virt.libvirt.driver [-] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Instance destroyed successfully.#033[00m
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.662 189391 DEBUG nova.objects.instance [None req-650ae985-ec53-4213-9656-f72832ae1656 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Lazy-loading 'resources' on Instance uuid c6b20e96-2371-4349-b934-bdb87bec59d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:41:51 compute-0 conmon[250752]: conmon e5aed9d04b3e4f05f230 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e5aed9d04b3e4f05f2308af18707355efd3111acb281923dd5f72c4d803dc578.scope/container/memory.events
Nov 26 23:41:51 compute-0 podman[250809]: 2025-11-26 23:41:51.665822108 +0000 UTC m=+0.081575963 container died e5aed9d04b3e4f05f2308af18707355efd3111acb281923dd5f72c4d803dc578 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9252b9f6-aeac-437f-8208-641d9bceb4ae, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.679 189391 DEBUG nova.virt.libvirt.vif [None req-650ae985-ec53-4213-9656-f72832ae1656 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T23:41:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-364924012',display_name='tempest-ServersTestManualDisk-server-364924012',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-364924012',id=8,image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGSEZemh1zLRHkyjFA6VOJ4hN9BcHH+tcWiplNtszQ3uUhmrW83XS/B/QZVgx7+4tCbXNCMTldKutFWvcgFmrNFVnE8t1/IIUGRsTmnUfwx8Wlm+0uktcD2F2GRP2SUd+w==',key_name='tempest-keypair-1358674110',keypairs=<?>,launch_index=0,launched_at=2025-11-26T23:41:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cf54dd78f02c4fc2a3dd9ae4ce3088a7',ramdisk_id='',reservation_id='r-4ayzm0u7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-2042327539',owner_user_name='tempest-ServersTestManualDisk-2042327539-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T23:41:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='e8515e48887c45eebf0f44cc18b2f953',uuid=c6b20e96-2371-4349-b934-bdb87bec59d0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2df92f44-2be3-4cdf-b73c-654206b2997d", "address": "fa:16:3e:98:89:cb", "network": {"id": "9252b9f6-aeac-437f-8208-641d9bceb4ae", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-669404946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf54dd78f02c4fc2a3dd9ae4ce3088a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2df92f44-2b", "ovs_interfaceid": "2df92f44-2be3-4cdf-b73c-654206b2997d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.679 189391 DEBUG nova.network.os_vif_util [None req-650ae985-ec53-4213-9656-f72832ae1656 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Converting VIF {"id": "2df92f44-2be3-4cdf-b73c-654206b2997d", "address": "fa:16:3e:98:89:cb", "network": {"id": "9252b9f6-aeac-437f-8208-641d9bceb4ae", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-669404946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf54dd78f02c4fc2a3dd9ae4ce3088a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2df92f44-2b", "ovs_interfaceid": "2df92f44-2be3-4cdf-b73c-654206b2997d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.680 189391 DEBUG nova.network.os_vif_util [None req-650ae985-ec53-4213-9656-f72832ae1656 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:89:cb,bridge_name='br-int',has_traffic_filtering=True,id=2df92f44-2be3-4cdf-b73c-654206b2997d,network=Network(9252b9f6-aeac-437f-8208-641d9bceb4ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2df92f44-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.680 189391 DEBUG os_vif [None req-650ae985-ec53-4213-9656-f72832ae1656 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:89:cb,bridge_name='br-int',has_traffic_filtering=True,id=2df92f44-2be3-4cdf-b73c-654206b2997d,network=Network(9252b9f6-aeac-437f-8208-641d9bceb4ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2df92f44-2b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.682 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.682 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2df92f44-2b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
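
The DelPortCommand above is ovsdbapp's del_port call being committed as a single OVSDB transaction against the local Open_vSwitch database. A hedged sketch of that API; the db.sock path and timeout are assumptions, not values from this host:

    # Sketch of the ovsdbapp call behind "DelPortCommand(...)" above.
    from ovs.db import idl as ovs_idl
    from ovsdbapp.backend.ovs_idl import connection, idlutils
    from ovsdbapp.schema.open_vswitch import impl_idl

    remote = "unix:/run/openvswitch/db.sock"  # assumed socket path
    helper = idlutils.get_schema_helper(remote, "Open_vSwitch")
    helper.register_all()
    conn = connection.Connection(ovs_idl.Idl(remote, helper), timeout=10)
    ovs = impl_idl.OvsdbIdl(conn)

    # One transaction, one command -- hence "Running txn n=1 command(idx=0)":
    ovs.del_port("tap2df92f44-2b", bridge="br-int",
                 if_exists=True).execute(check_error=True)
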
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.684 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.687 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.690 189391 INFO os_vif [None req-650ae985-ec53-4213-9656-f72832ae1656 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:89:cb,bridge_name='br-int',has_traffic_filtering=True,id=2df92f44-2be3-4cdf-b73c-654206b2997d,network=Network(9252b9f6-aeac-437f-8208-641d9bceb4ae),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2df92f44-2b')#033[00m
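
The Converting VIF / Converted object / Unplugging vif progression above is the os-vif library flow: nova translates its own VIF dict into an os_vif object and hands it to the "ovs" plugin. A hedged, abridged sketch (not nova's literal code):

    # Rebuilds (abridged) the VIFOpenVSwitch printed above and unplugs it.
    import os_vif
    from os_vif.objects import instance_info, vif as vif_obj

    os_vif.initialize()  # discovers plugins, including "ovs"

    vif = vif_obj.VIFOpenVSwitch(
        id="2df92f44-2be3-4cdf-b73c-654206b2997d",
        address="fa:16:3e:98:89:cb",
        bridge_name="br-int",
        vif_name="tap2df92f44-2b")
    info = instance_info.InstanceInfo(
        uuid="c6b20e96-2371-4349-b934-bdb87bec59d0",
        name="tempest-ServersTestManualDisk-server-364924012")

    # Dispatches to the ovs plugin; on success nova logs
    # "Successfully unplugged vif ..." as seen above.
    os_vif.unplug(vif, info)
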
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.691 189391 INFO nova.virt.libvirt.driver [None req-650ae985-ec53-4213-9656-f72832ae1656 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Deleting instance files /var/lib/nova/instances/c6b20e96-2371-4349-b934-bdb87bec59d0_del#033[00m
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.692 189391 INFO nova.virt.libvirt.driver [None req-650ae985-ec53-4213-9656-f72832ae1656 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Deletion of /var/lib/nova/instances/c6b20e96-2371-4349-b934-bdb87bec59d0_del complete#033[00m
Nov 26 23:41:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e5aed9d04b3e4f05f2308af18707355efd3111acb281923dd5f72c4d803dc578-userdata-shm.mount: Deactivated successfully.
Nov 26 23:41:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-018a2b0128cf339a0555e843b98869b2e420f3864d6f6ec0b5101e0c6b549f7b-merged.mount: Deactivated successfully.
Nov 26 23:41:51 compute-0 podman[250809]: 2025-11-26 23:41:51.721309225 +0000 UTC m=+0.137063080 container cleanup e5aed9d04b3e4f05f2308af18707355efd3111acb281923dd5f72c4d803dc578 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9252b9f6-aeac-437f-8208-641d9bceb4ae, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 26 23:41:51 compute-0 systemd[1]: libpod-conmon-e5aed9d04b3e4f05f2308af18707355efd3111acb281923dd5f72c4d803dc578.scope: Deactivated successfully.
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.758 189391 INFO nova.compute.manager [None req-650ae985-ec53-4213-9656-f72832ae1656 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Took 0.38 seconds to destroy the instance on the hypervisor.#033[00m
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.758 189391 DEBUG oslo.service.loopingcall [None req-650ae985-ec53-4213-9656-f72832ae1656 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.759 189391 DEBUG nova.compute.manager [-] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.759 189391 DEBUG nova.network.neutron [-] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 26 23:41:51 compute-0 podman[250854]: 2025-11-26 23:41:51.809161593 +0000 UTC m=+0.059537495 container remove e5aed9d04b3e4f05f2308af18707355efd3111acb281923dd5f72c4d803dc578 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9252b9f6-aeac-437f-8208-641d9bceb4ae, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 26 23:41:51 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:51.819 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[81deeef2-8e50-435c-bdbf-1b0b6cb3afa0]: (4, ('Wed Nov 26 11:41:51 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9252b9f6-aeac-437f-8208-641d9bceb4ae (e5aed9d04b3e4f05f2308af18707355efd3111acb281923dd5f72c4d803dc578)\ne5aed9d04b3e4f05f2308af18707355efd3111acb281923dd5f72c4d803dc578\nWed Nov 26 11:41:51 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9252b9f6-aeac-437f-8208-641d9bceb4ae (e5aed9d04b3e4f05f2308af18707355efd3111acb281923dd5f72c4d803dc578)\ne5aed9d04b3e4f05f2308af18707355efd3111acb281923dd5f72c4d803dc578\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:51 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:51.821 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[36e6025b-7593-466b-b3cf-8741cdda8a4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
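
The privsep: reply[...] lines record round trips to oslo.privsep's privileged daemon: the unprivileged agent calls a decorated entrypoint, the daemon runs it with elevated capabilities, and the reply tuple carries the return value back (here, the stdout/stderr/rc of the container stop and delete). A hedged sketch of the pattern; the context name and capability set are illustrative, not neutron's actual definitions:

    from oslo_privsep import capabilities, priv_context

    ctx = priv_context.PrivContext(
        "example", cfg_section="privsep",
        capabilities=[capabilities.CAP_SYS_ADMIN])

    @ctx.entrypoint
    def stop_container(name):
        # Runs inside the privileged daemon; its return value is what the
        # "privsep: reply[...]" lines above ship back to the agent.
        ...
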
Nov 26 23:41:51 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:51.822 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9252b9f6-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:41:51 compute-0 kernel: tap9252b9f6-a0: left promiscuous mode
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.825 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:51 compute-0 nova_compute[189387]: 2025-11-26 23:41:51.836 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:51 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:51.838 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[aa634932-cd32-4383-86a9-19c7d20f9878]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:51 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:51.851 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[b49918a5-a076-4543-96a1-17bc0125f348]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:51 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:51.853 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[05f572b6-3ac6-419a-bba7-766f920995f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:51 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:51.870 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[80f6d3a0-8aa3-4fe7-8ae8-b803308fde1f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 518919, 'reachable_time': 16507, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250868, 'error': None, 'target': 'ovnmeta-9252b9f6-aeac-437f-8208-641d9bceb4ae', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:51 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:51.875 106708 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9252b9f6-aeac-437f-8208-641d9bceb4ae deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 26 23:41:51 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:41:51.875 106708 DEBUG oslo.privsep.daemon [-] privsep: reply[692beebc-ec15-44a4-a26a-e5c3d1dac219]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:41:51 compute-0 systemd[1]: run-netns-ovnmeta\x2d9252b9f6\x2daeac\x2d437f\x2d8208\x2d641d9bceb4ae.mount: Deactivated successfully.
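
remove_netns above is neutron's privileged wrapper for deleting the per-network metadata namespace; once the namespace is gone, systemd reaps its bind mount (the run-netns-... unit). Roughly equivalent, as a hedged sketch with pyroute2 (the library neutron's privileged ip_lib wraps):

    import errno
    from pyroute2 import netns

    try:
        netns.remove("ovnmeta-9252b9f6-aeac-437f-8208-641d9bceb4ae")
    except OSError as e:
        if e.errno != errno.ENOENT:  # already gone counts as success
            raise
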
Nov 26 23:41:53 compute-0 nova_compute[189387]: 2025-11-26 23:41:53.393 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:53 compute-0 nova_compute[189387]: 2025-11-26 23:41:53.597 189391 DEBUG nova.network.neutron [-] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:41:53 compute-0 nova_compute[189387]: 2025-11-26 23:41:53.620 189391 INFO nova.compute.manager [-] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Took 1.86 seconds to deallocate network for instance.#033[00m
Nov 26 23:41:53 compute-0 nova_compute[189387]: 2025-11-26 23:41:53.688 189391 DEBUG oslo_concurrency.lockutils [None req-650ae985-ec53-4213-9656-f72832ae1656 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:41:53 compute-0 nova_compute[189387]: 2025-11-26 23:41:53.689 189391 DEBUG oslo_concurrency.lockutils [None req-650ae985-ec53-4213-9656-f72832ae1656 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:41:53 compute-0 nova_compute[189387]: 2025-11-26 23:41:53.783 189391 DEBUG nova.compute.manager [req-2f8def47-9810-4afc-9f22-b7297b479896 req-6c06f023-0ccd-4a74-b947-57f6d3788c22 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Received event network-vif-unplugged-2df92f44-2be3-4cdf-b73c-654206b2997d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:41:53 compute-0 nova_compute[189387]: 2025-11-26 23:41:53.784 189391 DEBUG oslo_concurrency.lockutils [req-2f8def47-9810-4afc-9f22-b7297b479896 req-6c06f023-0ccd-4a74-b947-57f6d3788c22 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "c6b20e96-2371-4349-b934-bdb87bec59d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:41:53 compute-0 nova_compute[189387]: 2025-11-26 23:41:53.785 189391 DEBUG oslo_concurrency.lockutils [req-2f8def47-9810-4afc-9f22-b7297b479896 req-6c06f023-0ccd-4a74-b947-57f6d3788c22 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "c6b20e96-2371-4349-b934-bdb87bec59d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:41:53 compute-0 nova_compute[189387]: 2025-11-26 23:41:53.786 189391 DEBUG oslo_concurrency.lockutils [req-2f8def47-9810-4afc-9f22-b7297b479896 req-6c06f023-0ccd-4a74-b947-57f6d3788c22 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "c6b20e96-2371-4349-b934-bdb87bec59d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:41:53 compute-0 nova_compute[189387]: 2025-11-26 23:41:53.787 189391 DEBUG nova.compute.manager [req-2f8def47-9810-4afc-9f22-b7297b479896 req-6c06f023-0ccd-4a74-b947-57f6d3788c22 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] No waiting events found dispatching network-vif-unplugged-2df92f44-2be3-4cdf-b73c-654206b2997d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:41:53 compute-0 nova_compute[189387]: 2025-11-26 23:41:53.787 189391 WARNING nova.compute.manager [req-2f8def47-9810-4afc-9f22-b7297b479896 req-6c06f023-0ccd-4a74-b947-57f6d3788c22 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Received unexpected event network-vif-unplugged-2df92f44-2be3-4cdf-b73c-654206b2997d for instance with vm_state deleted and task_state None.#033[00m
Nov 26 23:41:53 compute-0 nova_compute[189387]: 2025-11-26 23:41:53.788 189391 DEBUG nova.compute.manager [req-2f8def47-9810-4afc-9f22-b7297b479896 req-6c06f023-0ccd-4a74-b947-57f6d3788c22 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Received event network-vif-plugged-2df92f44-2be3-4cdf-b73c-654206b2997d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:41:53 compute-0 nova_compute[189387]: 2025-11-26 23:41:53.789 189391 DEBUG oslo_concurrency.lockutils [req-2f8def47-9810-4afc-9f22-b7297b479896 req-6c06f023-0ccd-4a74-b947-57f6d3788c22 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "c6b20e96-2371-4349-b934-bdb87bec59d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:41:53 compute-0 nova_compute[189387]: 2025-11-26 23:41:53.789 189391 DEBUG oslo_concurrency.lockutils [req-2f8def47-9810-4afc-9f22-b7297b479896 req-6c06f023-0ccd-4a74-b947-57f6d3788c22 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "c6b20e96-2371-4349-b934-bdb87bec59d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:41:53 compute-0 nova_compute[189387]: 2025-11-26 23:41:53.790 189391 DEBUG oslo_concurrency.lockutils [req-2f8def47-9810-4afc-9f22-b7297b479896 req-6c06f023-0ccd-4a74-b947-57f6d3788c22 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "c6b20e96-2371-4349-b934-bdb87bec59d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:41:53 compute-0 nova_compute[189387]: 2025-11-26 23:41:53.791 189391 DEBUG nova.compute.manager [req-2f8def47-9810-4afc-9f22-b7297b479896 req-6c06f023-0ccd-4a74-b947-57f6d3788c22 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] No waiting events found dispatching network-vif-plugged-2df92f44-2be3-4cdf-b73c-654206b2997d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:41:53 compute-0 nova_compute[189387]: 2025-11-26 23:41:53.792 189391 WARNING nova.compute.manager [req-2f8def47-9810-4afc-9f22-b7297b479896 req-6c06f023-0ccd-4a74-b947-57f6d3788c22 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Received unexpected event network-vif-plugged-2df92f44-2be3-4cdf-b73c-654206b2997d for instance with vm_state deleted and task_state None.#033[00m
Nov 26 23:41:53 compute-0 nova_compute[189387]: 2025-11-26 23:41:53.792 189391 DEBUG nova.compute.manager [req-2f8def47-9810-4afc-9f22-b7297b479896 req-6c06f023-0ccd-4a74-b947-57f6d3788c22 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Received event network-vif-deleted-2df92f44-2be3-4cdf-b73c-654206b2997d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:41:53 compute-0 nova_compute[189387]: 2025-11-26 23:41:53.806 189391 DEBUG nova.compute.provider_tree [None req-650ae985-ec53-4213-9656-f72832ae1656 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:41:53 compute-0 nova_compute[189387]: 2025-11-26 23:41:53.824 189391 DEBUG nova.scheduler.client.report [None req-650ae985-ec53-4213-9656-f72832ae1656 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
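
The inventory dict above reports raw totals; what the scheduler can actually place is (total - reserved) × allocation_ratio per resource class, the standard placement capacity formula. Worked through for these values:

    # Effective capacity implied by the inventory in the line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2
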
Nov 26 23:41:53 compute-0 nova_compute[189387]: 2025-11-26 23:41:53.850 189391 DEBUG oslo_concurrency.lockutils [None req-650ae985-ec53-4213-9656-f72832ae1656 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:41:53 compute-0 nova_compute[189387]: 2025-11-26 23:41:53.877 189391 INFO nova.scheduler.client.report [None req-650ae985-ec53-4213-9656-f72832ae1656 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Deleted allocations for instance c6b20e96-2371-4349-b934-bdb87bec59d0#033[00m
Nov 26 23:41:53 compute-0 nova_compute[189387]: 2025-11-26 23:41:53.910 189391 DEBUG nova.network.neutron [req-5a25f036-1f5c-42f8-922a-24c6e0b7d068 req-c10e0f62-ce78-49e6-b48a-9f1d9fed4c1f f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Updated VIF entry in instance network info cache for port 2df92f44-2be3-4cdf-b73c-654206b2997d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 23:41:53 compute-0 nova_compute[189387]: 2025-11-26 23:41:53.911 189391 DEBUG nova.network.neutron [req-5a25f036-1f5c-42f8-922a-24c6e0b7d068 req-c10e0f62-ce78-49e6-b48a-9f1d9fed4c1f f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Updating instance_info_cache with network_info: [{"id": "2df92f44-2be3-4cdf-b73c-654206b2997d", "address": "fa:16:3e:98:89:cb", "network": {"id": "9252b9f6-aeac-437f-8208-641d9bceb4ae", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-669404946-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cf54dd78f02c4fc2a3dd9ae4ce3088a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2df92f44-2b", "ovs_interfaceid": "2df92f44-2be3-4cdf-b73c-654206b2997d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:41:53 compute-0 nova_compute[189387]: 2025-11-26 23:41:53.967 189391 DEBUG oslo_concurrency.lockutils [None req-650ae985-ec53-4213-9656-f72832ae1656 e8515e48887c45eebf0f44cc18b2f953 cf54dd78f02c4fc2a3dd9ae4ce3088a7 - - default default] Lock "c6b20e96-2371-4349-b934-bdb87bec59d0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:41:53 compute-0 nova_compute[189387]: 2025-11-26 23:41:53.969 189391 DEBUG oslo_concurrency.lockutils [req-5a25f036-1f5c-42f8-922a-24c6e0b7d068 req-c10e0f62-ce78-49e6-b48a-9f1d9fed4c1f f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Releasing lock "refresh_cache-c6b20e96-2371-4349-b934-bdb87bec59d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:41:54 compute-0 nova_compute[189387]: 2025-11-26 23:41:54.839 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:55 compute-0 ovn_controller[97697]: 2025-11-26T23:41:55Z|00140|binding|INFO|Releasing lport 779990b0-f58d-4df2-b9a7-48b5134f6ea9 from this chassis (sb_readonly=0)
Nov 26 23:41:55 compute-0 nova_compute[189387]: 2025-11-26 23:41:55.245 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:55 compute-0 nova_compute[189387]: 2025-11-26 23:41:55.512 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:55 compute-0 podman[250869]: 2025-11-26 23:41:55.806999308 +0000 UTC m=+0.090564182 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:41:56 compute-0 nova_compute[189387]: 2025-11-26 23:41:56.685 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:41:59 compute-0 podman[203621]: time="2025-11-26T23:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:41:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:41:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
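
The two GET requests above are the podman_exporter container (health-checked earlier in this log) scraping the libpod REST API over the unix socket named in its CONTAINER_HOST. A hedged equivalent using the podman Python client:

    # Mirrors "GET /v4.9.3/libpod/containers/json?all=true..." above.
    from podman import PodmanClient

    with PodmanClient(base_url="unix:///run/podman/podman.sock") as client:
        for c in client.containers.list(all=True):
            print(c.name, c.status)
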
Nov 26 23:42:00 compute-0 nova_compute[189387]: 2025-11-26 23:42:00.121 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:42:00 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:00.243 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:74:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:17:d1:48:8c:c3'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:42:00 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:00.244 106595 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 26 23:42:00 compute-0 nova_compute[189387]: 2025-11-26 23:42:00.244 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:00 compute-0 nova_compute[189387]: 2025-11-26 23:42:00.516 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:00 compute-0 nova_compute[189387]: 2025-11-26 23:42:00.938 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:01 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:01.248 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbd59242-3683-4df7-8a2a-12b2eb702783, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:42:01 compute-0 openstack_network_exporter[205787]: ERROR   23:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:42:01 compute-0 openstack_network_exporter[205787]: ERROR   23:42:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:42:01 compute-0 openstack_network_exporter[205787]: ERROR   23:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:42:01 compute-0 openstack_network_exporter[205787]: ERROR   23:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:42:01 compute-0 openstack_network_exporter[205787]: ERROR   23:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:42:01 compute-0 nova_compute[189387]: 2025-11-26 23:42:01.532 189391 DEBUG oslo_concurrency.lockutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Acquiring lock "cf0578c2-8c80-4b7e-a866-a753553c6f9e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:01 compute-0 nova_compute[189387]: 2025-11-26 23:42:01.533 189391 DEBUG oslo_concurrency.lockutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "cf0578c2-8c80-4b7e-a866-a753553c6f9e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:42:01 compute-0 nova_compute[189387]: 2025-11-26 23:42:01.558 189391 DEBUG nova.compute.manager [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 26 23:42:01 compute-0 nova_compute[189387]: 2025-11-26 23:42:01.648 189391 DEBUG oslo_concurrency.lockutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:01 compute-0 nova_compute[189387]: 2025-11-26 23:42:01.649 189391 DEBUG oslo_concurrency.lockutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:42:01 compute-0 nova_compute[189387]: 2025-11-26 23:42:01.659 189391 DEBUG nova.virt.hardware [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 26 23:42:01 compute-0 nova_compute[189387]: 2025-11-26 23:42:01.660 189391 INFO nova.compute.claims [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 26 23:42:01 compute-0 nova_compute[189387]: 2025-11-26 23:42:01.687 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:01 compute-0 nova_compute[189387]: 2025-11-26 23:42:01.806 189391 DEBUG nova.compute.provider_tree [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:42:01 compute-0 nova_compute[189387]: 2025-11-26 23:42:01.824 189391 DEBUG nova.scheduler.client.report [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 23:42:01 compute-0 nova_compute[189387]: 2025-11-26 23:42:01.850 189391 DEBUG oslo_concurrency.lockutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:42:01 compute-0 nova_compute[189387]: 2025-11-26 23:42:01.851 189391 DEBUG nova.compute.manager [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 26 23:42:01 compute-0 nova_compute[189387]: 2025-11-26 23:42:01.906 189391 DEBUG nova.compute.manager [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 26 23:42:01 compute-0 nova_compute[189387]: 2025-11-26 23:42:01.906 189391 DEBUG nova.network.neutron [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 26 23:42:01 compute-0 nova_compute[189387]: 2025-11-26 23:42:01.934 189391 INFO nova.virt.libvirt.driver [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 26 23:42:01 compute-0 nova_compute[189387]: 2025-11-26 23:42:01.950 189391 DEBUG nova.compute.manager [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 26 23:42:02 compute-0 nova_compute[189387]: 2025-11-26 23:42:02.059 189391 DEBUG nova.compute.manager [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 26 23:42:02 compute-0 nova_compute[189387]: 2025-11-26 23:42:02.061 189391 DEBUG nova.virt.libvirt.driver [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 26 23:42:02 compute-0 nova_compute[189387]: 2025-11-26 23:42:02.061 189391 INFO nova.virt.libvirt.driver [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Creating image(s)#033[00m
Nov 26 23:42:02 compute-0 nova_compute[189387]: 2025-11-26 23:42:02.062 189391 DEBUG oslo_concurrency.lockutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Acquiring lock "/var/lib/nova/instances/cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:02 compute-0 nova_compute[189387]: 2025-11-26 23:42:02.062 189391 DEBUG oslo_concurrency.lockutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "/var/lib/nova/instances/cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:42:02 compute-0 nova_compute[189387]: 2025-11-26 23:42:02.063 189391 DEBUG oslo_concurrency.lockutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "/var/lib/nova/instances/cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:42:02 compute-0 nova_compute[189387]: 2025-11-26 23:42:02.080 189391 DEBUG oslo_concurrency.processutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:42:02 compute-0 nova_compute[189387]: 2025-11-26 23:42:02.132 189391 DEBUG nova.policy [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6a001028c92e48d0b5914bef72937111', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '41a6ffab20ee4735b3f190a1e087aed2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 26 23:42:02 compute-0 nova_compute[189387]: 2025-11-26 23:42:02.173 189391 DEBUG oslo_concurrency.processutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
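
The Running cmd / CMD "..." returned pair above is oslo.concurrency's processutils wrapping qemu-img in a prlimit guard (1 GiB of address space, 30 s of CPU) so that probing an untrusted image cannot wedge the compute service. A hedged sketch of the same call:

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824,  # --as
                                        cpu_time=30)               # --cpu
    out, err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C", "qemu-img", "info",
        "/var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685",
        "--force-share", "--output=json",
        prlimit=limits)
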
Nov 26 23:42:02 compute-0 nova_compute[189387]: 2025-11-26 23:42:02.174 189391 DEBUG oslo_concurrency.lockutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Acquiring lock "4bfc824fda96e5558a690ed70963ecd686d78685" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:02 compute-0 nova_compute[189387]: 2025-11-26 23:42:02.174 189391 DEBUG oslo_concurrency.lockutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "4bfc824fda96e5558a690ed70963ecd686d78685" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:42:02 compute-0 nova_compute[189387]: 2025-11-26 23:42:02.185 189391 DEBUG oslo_concurrency.processutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:42:02 compute-0 nova_compute[189387]: 2025-11-26 23:42:02.238 189391 DEBUG oslo_concurrency.processutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:42:02 compute-0 nova_compute[189387]: 2025-11-26 23:42:02.239 189391 DEBUG oslo_concurrency.processutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685,backing_fmt=raw /var/lib/nova/instances/cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:42:02 compute-0 nova_compute[189387]: 2025-11-26 23:42:02.281 189391 DEBUG oslo_concurrency.processutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685,backing_fmt=raw /var/lib/nova/instances/cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk 1073741824" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
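
The qemu-img create above builds the instance disk as a qcow2 copy-on-write overlay: reads fall through to the cached base image under _base/, writes stay in the per-instance file, and the trailing 1073741824 is the flavor's 1 GiB root disk. One hedged way to inspect the resulting chain:

    # Keys are standard "qemu-img info --output=json" fields.
    import json
    import subprocess

    disk = "/var/lib/nova/instances/cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk"
    info = json.loads(subprocess.check_output(
        ["qemu-img", "info", "--force-share", "--output=json", disk]))
    print(info["format"])            # qcow2
    print(info["backing-filename"])  # the _base image above
    print(info["virtual-size"])      # 1073741824
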
Nov 26 23:42:02 compute-0 nova_compute[189387]: 2025-11-26 23:42:02.282 189391 DEBUG oslo_concurrency.lockutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "4bfc824fda96e5558a690ed70963ecd686d78685" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:42:02 compute-0 nova_compute[189387]: 2025-11-26 23:42:02.282 189391 DEBUG oslo_concurrency.processutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:42:02 compute-0 nova_compute[189387]: 2025-11-26 23:42:02.351 189391 DEBUG oslo_concurrency.processutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:42:02 compute-0 nova_compute[189387]: 2025-11-26 23:42:02.352 189391 DEBUG nova.virt.disk.api [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Checking if we can resize image /var/lib/nova/instances/cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 26 23:42:02 compute-0 nova_compute[189387]: 2025-11-26 23:42:02.353 189391 DEBUG oslo_concurrency.processutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:42:02 compute-0 nova_compute[189387]: 2025-11-26 23:42:02.408 189391 DEBUG oslo_concurrency.processutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:42:02 compute-0 nova_compute[189387]: 2025-11-26 23:42:02.409 189391 DEBUG nova.virt.disk.api [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Cannot resize image /var/lib/nova/instances/cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
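The "Cannot resize image ... to a smaller size" entry is the expected outcome of can_resize_image when the requested size does not exceed the image's current virtual size: nova only ever grows a root disk in place, never shrinks it. A hedged sketch of that comparison (can_grow below is a hypothetical helper, not nova's actual code):

    import json
    import subprocess

    def can_grow(path: str, requested_bytes: int) -> bool:
        """Return True only if the disk would actually get bigger."""
        info = json.loads(subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            capture_output=True, check=True, text=True,
        ).stdout)
        return requested_bytes > info["virtual-size"]

    # For the 1 GiB flavor root disk in the log, an equal size means "no resize".
    print(can_grow("/var/lib/nova/instances/cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk",
                   1073741824))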
Nov 26 23:42:02 compute-0 nova_compute[189387]: 2025-11-26 23:42:02.410 189391 DEBUG nova.objects.instance [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lazy-loading 'migration_context' on Instance uuid cf0578c2-8c80-4b7e-a866-a753553c6f9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:42:02 compute-0 nova_compute[189387]: 2025-11-26 23:42:02.440 189391 DEBUG nova.virt.libvirt.driver [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 26 23:42:02 compute-0 nova_compute[189387]: 2025-11-26 23:42:02.440 189391 DEBUG nova.virt.libvirt.driver [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Ensure instance console log exists: /var/lib/nova/instances/cf0578c2-8c80-4b7e-a866-a753553c6f9e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 26 23:42:02 compute-0 nova_compute[189387]: 2025-11-26 23:42:02.441 189391 DEBUG oslo_concurrency.lockutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:02 compute-0 nova_compute[189387]: 2025-11-26 23:42:02.442 189391 DEBUG oslo_concurrency.lockutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:42:02 compute-0 nova_compute[189387]: 2025-11-26 23:42:02.442 189391 DEBUG oslo_concurrency.lockutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
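The acquire/release pair around _allocate_mdevs is oslo.concurrency's lockutils at work; decorating a function with the same lock name produces exactly these "acquired :: waited" / "released :: held" debug lines. A minimal sketch, assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    # Same semaphore name as in the log; every code path on this worker that
    # uses the same name serializes against it.
    @lockutils.synchronized("vgpu_resources")
    def allocate_mdevs():
        # With no vGPU-backed flavor in play the critical section is empty,
        # which is why the log shows the lock held for 0.000s.
        return []

    allocate_mdevs()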
Nov 26 23:42:02 compute-0 podman[250907]: 2025-11-26 23:42:02.803654556 +0000 UTC m=+0.098911765 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:42:02 compute-0 nova_compute[189387]: 2025-11-26 23:42:02.982 189391 DEBUG nova.network.neutron [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Successfully created port: d5e5a27b-2557-44b9-9b24-392e1a2c33bd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
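"Successfully created port" corresponds to a POST against Neutron's ports API on the instance's network. Nova talks to Neutron through its own client plumbing; the sketch below shows the equivalent call through openstacksdk instead, assuming credentials in a clouds.yaml entry named "mycloud" (a hypothetical name):

    import openstack

    # Hypothetical cloud name; nova itself uses service credentials.
    conn = openstack.connect(cloud="mycloud")

    port = conn.network.create_port(
        network_id="865b8b48-3753-4a05-b614-ccecb1e87781",  # from the log
        device_id="cf0578c2-8c80-4b7e-a866-a753553c6f9e",
        device_owner="compute:nova",
    )
    print(port.id, port.mac_address)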
Nov 26 23:42:03 compute-0 ovn_controller[97697]: 2025-11-26T23:42:03Z|00141|binding|INFO|Releasing lport 779990b0-f58d-4df2-b9a7-48b5134f6ea9 from this chassis (sb_readonly=0)
Nov 26 23:42:03 compute-0 nova_compute[189387]: 2025-11-26 23:42:03.838 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:04 compute-0 nova_compute[189387]: 2025-11-26 23:42:04.171 189391 DEBUG nova.network.neutron [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Successfully updated port: d5e5a27b-2557-44b9-9b24-392e1a2c33bd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 26 23:42:04 compute-0 nova_compute[189387]: 2025-11-26 23:42:04.198 189391 DEBUG oslo_concurrency.lockutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Acquiring lock "refresh_cache-cf0578c2-8c80-4b7e-a866-a753553c6f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:42:04 compute-0 nova_compute[189387]: 2025-11-26 23:42:04.198 189391 DEBUG oslo_concurrency.lockutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Acquired lock "refresh_cache-cf0578c2-8c80-4b7e-a866-a753553c6f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:42:04 compute-0 nova_compute[189387]: 2025-11-26 23:42:04.198 189391 DEBUG nova.network.neutron [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 26 23:42:04 compute-0 nova_compute[189387]: 2025-11-26 23:42:04.292 189391 DEBUG nova.compute.manager [req-73da241e-739c-4637-8227-08f76e8ad2c2 req-49128b95-0e3d-4d15-ad0e-69a579e74114 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Received event network-changed-d5e5a27b-2557-44b9-9b24-392e1a2c33bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:42:04 compute-0 nova_compute[189387]: 2025-11-26 23:42:04.292 189391 DEBUG nova.compute.manager [req-73da241e-739c-4637-8227-08f76e8ad2c2 req-49128b95-0e3d-4d15-ad0e-69a579e74114 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Refreshing instance network info cache due to event network-changed-d5e5a27b-2557-44b9-9b24-392e1a2c33bd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 23:42:04 compute-0 nova_compute[189387]: 2025-11-26 23:42:04.292 189391 DEBUG oslo_concurrency.lockutils [req-73da241e-739c-4637-8227-08f76e8ad2c2 req-49128b95-0e3d-4d15-ad0e-69a579e74114 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "refresh_cache-cf0578c2-8c80-4b7e-a866-a753553c6f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:42:04 compute-0 nova_compute[189387]: 2025-11-26 23:42:04.416 189391 DEBUG nova.network.neutron [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.458 189391 DEBUG nova.network.neutron [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Updating instance_info_cache with network_info: [{"id": "d5e5a27b-2557-44b9-9b24-392e1a2c33bd", "address": "fa:16:3e:81:13:e3", "network": {"id": "865b8b48-3753-4a05-b614-ccecb1e87781", "bridge": "br-int", "label": "tempest-network-smoke--2066791378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41a6ffab20ee4735b3f190a1e087aed2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e5a27b-25", "ovs_interfaceid": "d5e5a27b-2557-44b9-9b24-392e1a2c33bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
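The network_info blob cached above is a list of VIF dictionaries; the fixed IPs sit under network.subnets[].ips[]. A small sketch that walks that structure (data abbreviated from the log entry):

    network_info = [{
        "id": "d5e5a27b-2557-44b9-9b24-392e1a2c33bd",
        "address": "fa:16:3e:81:13:e3",
        "network": {"subnets": [{"cidr": "10.100.0.0/28",
                                 "ips": [{"address": "10.100.0.14",
                                          "type": "fixed"}]}]},
    }]

    # Print every address bound to every VIF of the instance.
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["id"], ip["address"], ip["type"])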
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.493 189391 DEBUG oslo_concurrency.lockutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Releasing lock "refresh_cache-cf0578c2-8c80-4b7e-a866-a753553c6f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.494 189391 DEBUG nova.compute.manager [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Instance network_info: |[{"id": "d5e5a27b-2557-44b9-9b24-392e1a2c33bd", "address": "fa:16:3e:81:13:e3", "network": {"id": "865b8b48-3753-4a05-b614-ccecb1e87781", "bridge": "br-int", "label": "tempest-network-smoke--2066791378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41a6ffab20ee4735b3f190a1e087aed2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e5a27b-25", "ovs_interfaceid": "d5e5a27b-2557-44b9-9b24-392e1a2c33bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.495 189391 DEBUG oslo_concurrency.lockutils [req-73da241e-739c-4637-8227-08f76e8ad2c2 req-49128b95-0e3d-4d15-ad0e-69a579e74114 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquired lock "refresh_cache-cf0578c2-8c80-4b7e-a866-a753553c6f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.495 189391 DEBUG nova.network.neutron [req-73da241e-739c-4637-8227-08f76e8ad2c2 req-49128b95-0e3d-4d15-ad0e-69a579e74114 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Refreshing network info cache for port d5e5a27b-2557-44b9-9b24-392e1a2c33bd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.498 189391 DEBUG nova.virt.libvirt.driver [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Start _get_guest_xml network_info=[{"id": "d5e5a27b-2557-44b9-9b24-392e1a2c33bd", "address": "fa:16:3e:81:13:e3", "network": {"id": "865b8b48-3753-4a05-b614-ccecb1e87781", "bridge": "br-int", "label": "tempest-network-smoke--2066791378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41a6ffab20ee4735b3f190a1e087aed2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e5a27b-25", "ovs_interfaceid": "d5e5a27b-2557-44b9-9b24-392e1a2c33bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T23:40:04Z,direct_url=<?>,disk_format='qcow2',id=948c6d5b-0d46-4aec-8649-b6cdcb1a5694,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd2e793599b6418881c391df7f71e0c6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T23:40:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': '948c6d5b-0d46-4aec-8649-b6cdcb1a5694'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.506 189391 WARNING nova.virt.libvirt.driver [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.512 189391 DEBUG nova.virt.libvirt.host [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.513 189391 DEBUG nova.virt.libvirt.host [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.520 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.526 189391 DEBUG nova.virt.libvirt.host [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.527 189391 DEBUG nova.virt.libvirt.host [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
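The two probes above first look for a cgroups v1 cpu controller, miss, and then find one on the v2 unified hierarchy. On a v2 host the check amounts to looking for "cpu" in /sys/fs/cgroup/cgroup.controllers; a sketch of that test (nova's exact probe may differ in detail):

    from pathlib import Path

    def host_has_cgroupsv2_cpu() -> bool:
        controllers = Path("/sys/fs/cgroup/cgroup.controllers")
        if not controllers.exists():
            return False  # not a unified (v2) hierarchy
        return "cpu" in controllers.read_text().split()

    print(host_has_cgroupsv2_cpu())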
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.527 189391 DEBUG nova.virt.libvirt.driver [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.528 189391 DEBUG nova.virt.hardware [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T23:40:03Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a4234b2d-ed51-4e17-ad57-a8fb6154451b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T23:40:04Z,direct_url=<?>,disk_format='qcow2',id=948c6d5b-0d46-4aec-8649-b6cdcb1a5694,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd2e793599b6418881c391df7f71e0c6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T23:40:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.528 189391 DEBUG nova.virt.hardware [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.529 189391 DEBUG nova.virt.hardware [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.529 189391 DEBUG nova.virt.hardware [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.529 189391 DEBUG nova.virt.hardware [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.529 189391 DEBUG nova.virt.hardware [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.529 189391 DEBUG nova.virt.hardware [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.530 189391 DEBUG nova.virt.hardware [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.530 189391 DEBUG nova.virt.hardware [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.530 189391 DEBUG nova.virt.hardware [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.531 189391 DEBUG nova.virt.hardware [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
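The topology walk above is a small constraint search: every (sockets, cores, threads) triple whose product equals the vCPU count and that fits inside the 65536-per-dimension limits is a candidate, and with one vCPU and no flavor or image preference the only solution is 1:1:1. A compact sketch of that enumeration:

    from itertools import product

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Brute-force over divisors; fine for the small vCPU counts involved.
        divisors = [d for d in range(1, vcpus + 1) if vcpus % d == 0]
        for s, c in product(divisors, divisors):
            if vcpus % (s * c):
                continue
            t = vcpus // (s * c)
            if s <= max_sockets and c <= max_cores and t <= max_threads:
                yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], matching the log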
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.535 189391 DEBUG nova.virt.libvirt.vif [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T23:42:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-647630909',display_name='tempest-TestNetworkBasicOps-server-647630909',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-647630909',id=9,image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKBVGkstngapUY9m82a680mxXz9lnXQYezDKbSNcLxIbEJr7iMwiK+lPpiPQRUyqGO2qKz9xbpOo2CkdLxDv6r6xZvkZysoo9t6UxaWs6cIXf8J/N0PiyT8UZowknUb2CQ==',key_name='tempest-TestNetworkBasicOps-658321597',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='41a6ffab20ee4735b3f190a1e087aed2',ramdisk_id='',reservation_id='r-v20zfk65',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1869958511',owner_user_name='tempest-TestNetworkBasicOps-1869958511-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T23:42:02Z,user_data=None,user_id='6a001028c92e48d0b5914bef72937111',uuid=cf0578c2-8c80-4b7e-a866-a753553c6f9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5e5a27b-2557-44b9-9b24-392e1a2c33bd", "address": "fa:16:3e:81:13:e3", "network": {"id": "865b8b48-3753-4a05-b614-ccecb1e87781", "bridge": "br-int", "label": "tempest-network-smoke--2066791378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41a6ffab20ee4735b3f190a1e087aed2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e5a27b-25", "ovs_interfaceid": "d5e5a27b-2557-44b9-9b24-392e1a2c33bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.535 189391 DEBUG nova.network.os_vif_util [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Converting VIF {"id": "d5e5a27b-2557-44b9-9b24-392e1a2c33bd", "address": "fa:16:3e:81:13:e3", "network": {"id": "865b8b48-3753-4a05-b614-ccecb1e87781", "bridge": "br-int", "label": "tempest-network-smoke--2066791378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41a6ffab20ee4735b3f190a1e087aed2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e5a27b-25", "ovs_interfaceid": "d5e5a27b-2557-44b9-9b24-392e1a2c33bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.536 189391 DEBUG nova.network.os_vif_util [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:13:e3,bridge_name='br-int',has_traffic_filtering=True,id=d5e5a27b-2557-44b9-9b24-392e1a2c33bd,network=Network(865b8b48-3753-4a05-b614-ccecb1e87781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5e5a27b-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.537 189391 DEBUG nova.objects.instance [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lazy-loading 'pci_devices' on Instance uuid cf0578c2-8c80-4b7e-a866-a753553c6f9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.555 189391 DEBUG nova.virt.libvirt.driver [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] End _get_guest_xml xml=<domain type="kvm">
Nov 26 23:42:05 compute-0 nova_compute[189387]:  <uuid>cf0578c2-8c80-4b7e-a866-a753553c6f9e</uuid>
Nov 26 23:42:05 compute-0 nova_compute[189387]:  <name>instance-00000009</name>
Nov 26 23:42:05 compute-0 nova_compute[189387]:  <memory>131072</memory>
Nov 26 23:42:05 compute-0 nova_compute[189387]:  <vcpu>1</vcpu>
Nov 26 23:42:05 compute-0 nova_compute[189387]:  <metadata>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 23:42:05 compute-0 nova_compute[189387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:      <nova:name>tempest-TestNetworkBasicOps-server-647630909</nova:name>
Nov 26 23:42:05 compute-0 nova_compute[189387]:      <nova:creationTime>2025-11-26 23:42:05</nova:creationTime>
Nov 26 23:42:05 compute-0 nova_compute[189387]:      <nova:flavor name="m1.nano">
Nov 26 23:42:05 compute-0 nova_compute[189387]:        <nova:memory>128</nova:memory>
Nov 26 23:42:05 compute-0 nova_compute[189387]:        <nova:disk>1</nova:disk>
Nov 26 23:42:05 compute-0 nova_compute[189387]:        <nova:swap>0</nova:swap>
Nov 26 23:42:05 compute-0 nova_compute[189387]:        <nova:ephemeral>0</nova:ephemeral>
Nov 26 23:42:05 compute-0 nova_compute[189387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 23:42:05 compute-0 nova_compute[189387]:      </nova:flavor>
Nov 26 23:42:05 compute-0 nova_compute[189387]:      <nova:owner>
Nov 26 23:42:05 compute-0 nova_compute[189387]:        <nova:user uuid="6a001028c92e48d0b5914bef72937111">tempest-TestNetworkBasicOps-1869958511-project-member</nova:user>
Nov 26 23:42:05 compute-0 nova_compute[189387]:        <nova:project uuid="41a6ffab20ee4735b3f190a1e087aed2">tempest-TestNetworkBasicOps-1869958511</nova:project>
Nov 26 23:42:05 compute-0 nova_compute[189387]:      </nova:owner>
Nov 26 23:42:05 compute-0 nova_compute[189387]:      <nova:root type="image" uuid="948c6d5b-0d46-4aec-8649-b6cdcb1a5694"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:      <nova:ports>
Nov 26 23:42:05 compute-0 nova_compute[189387]:        <nova:port uuid="d5e5a27b-2557-44b9-9b24-392e1a2c33bd">
Nov 26 23:42:05 compute-0 nova_compute[189387]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:        </nova:port>
Nov 26 23:42:05 compute-0 nova_compute[189387]:      </nova:ports>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    </nova:instance>
Nov 26 23:42:05 compute-0 nova_compute[189387]:  </metadata>
Nov 26 23:42:05 compute-0 nova_compute[189387]:  <sysinfo type="smbios">
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <system>
Nov 26 23:42:05 compute-0 nova_compute[189387]:      <entry name="manufacturer">RDO</entry>
Nov 26 23:42:05 compute-0 nova_compute[189387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 23:42:05 compute-0 nova_compute[189387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 23:42:05 compute-0 nova_compute[189387]:      <entry name="serial">cf0578c2-8c80-4b7e-a866-a753553c6f9e</entry>
Nov 26 23:42:05 compute-0 nova_compute[189387]:      <entry name="uuid">cf0578c2-8c80-4b7e-a866-a753553c6f9e</entry>
Nov 26 23:42:05 compute-0 nova_compute[189387]:      <entry name="family">Virtual Machine</entry>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    </system>
Nov 26 23:42:05 compute-0 nova_compute[189387]:  </sysinfo>
Nov 26 23:42:05 compute-0 nova_compute[189387]:  <os>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <boot dev="hd"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <smbios mode="sysinfo"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:  </os>
Nov 26 23:42:05 compute-0 nova_compute[189387]:  <features>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <acpi/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <apic/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <vmcoreinfo/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:  </features>
Nov 26 23:42:05 compute-0 nova_compute[189387]:  <clock offset="utc">
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <timer name="hpet" present="no"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:  </clock>
Nov 26 23:42:05 compute-0 nova_compute[189387]:  <cpu mode="host-model" match="exact">
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:  </cpu>
Nov 26 23:42:05 compute-0 nova_compute[189387]:  <devices>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <disk type="file" device="disk">
Nov 26 23:42:05 compute-0 nova_compute[189387]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:      <target dev="vda" bus="virtio"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <disk type="file" device="cdrom">
Nov 26 23:42:05 compute-0 nova_compute[189387]:      <driver name="qemu" type="raw" cache="none"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk.config"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:      <target dev="sda" bus="sata"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <interface type="ethernet">
Nov 26 23:42:05 compute-0 nova_compute[189387]:      <mac address="fa:16:3e:81:13:e3"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:      <mtu size="1442"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:      <target dev="tapd5e5a27b-25"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    </interface>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <serial type="pty">
Nov 26 23:42:05 compute-0 nova_compute[189387]:      <log file="/var/lib/nova/instances/cf0578c2-8c80-4b7e-a866-a753553c6f9e/console.log" append="off"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    </serial>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <video>
Nov 26 23:42:05 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    </video>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <input type="tablet" bus="usb"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <rng model="virtio">
Nov 26 23:42:05 compute-0 nova_compute[189387]:      <backend model="random">/dev/urandom</backend>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    </rng>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <controller type="usb" index="0"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    <memballoon model="virtio">
Nov 26 23:42:05 compute-0 nova_compute[189387]:      <stats period="10"/>
Nov 26 23:42:05 compute-0 nova_compute[189387]:    </memballoon>
Nov 26 23:42:05 compute-0 nova_compute[189387]:  </devices>
Nov 26 23:42:05 compute-0 nova_compute[189387]: </domain>
Nov 26 23:42:05 compute-0 nova_compute[189387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
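Once _get_guest_xml returns, nova hands the domain XML to libvirt to define and boot the guest. A minimal sketch using the libvirt Python binding, assuming it is installed and a local qemu:///system socket is reachable; the tiny inline XML is a stand-in for the full document printed above, and createXML (transient define-and-start) is a simplification of nova's actual persist-then-launch sequence:

    import libvirt

    # Stand-in for the <domain type="kvm"> document above; type 'qemu' so the
    # sketch also runs on hosts without KVM.
    domain_xml = """
    <domain type='qemu'>
      <name>demo-tiny</name>
      <memory unit='MiB'>64</memory>
      <os><type arch='x86_64'>hvm</type></os>
    </domain>
    """

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.createXML(domain_xml, 0)  # define + start, transient
        print(dom.name(), dom.isActive())
    finally:
        conn.close()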
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.556 189391 DEBUG nova.compute.manager [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Preparing to wait for external event network-vif-plugged-d5e5a27b-2557-44b9-9b24-392e1a2c33bd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.556 189391 DEBUG oslo_concurrency.lockutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Acquiring lock "cf0578c2-8c80-4b7e-a866-a753553c6f9e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.556 189391 DEBUG oslo_concurrency.lockutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "cf0578c2-8c80-4b7e-a866-a753553c6f9e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.556 189391 DEBUG oslo_concurrency.lockutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "cf0578c2-8c80-4b7e-a866-a753553c6f9e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
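prepare_for_instance_event registers a waiter for network-vif-plugged before the VIF is actually plugged, so the later notification from Neutron cannot be lost in the gap: arm the event, do the work, then wait. A generic sketch of the same pattern with threading (nova's real implementation is eventlet-based):

    import threading

    events: dict[str, threading.Event] = {}

    def prepare(tag: str) -> threading.Event:
        # Register the waiter *before* starting the action that triggers it.
        return events.setdefault(tag, threading.Event())

    def deliver(tag: str) -> None:
        # Called from the external-event handler, cf. the
        # "Received event network-changed-..." entries earlier in the log.
        events[tag].set()

    waiter = prepare("network-vif-plugged-d5e5a27b-2557-44b9-9b24-392e1a2c33bd")
    deliver("network-vif-plugged-d5e5a27b-2557-44b9-9b24-392e1a2c33bd")
    print(waiter.wait(timeout=300))  # True once the vif-plugged event arrives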
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.557 189391 DEBUG nova.virt.libvirt.vif [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T23:42:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-647630909',display_name='tempest-TestNetworkBasicOps-server-647630909',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-647630909',id=9,image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKBVGkstngapUY9m82a680mxXz9lnXQYezDKbSNcLxIbEJr7iMwiK+lPpiPQRUyqGO2qKz9xbpOo2CkdLxDv6r6xZvkZysoo9t6UxaWs6cIXf8J/N0PiyT8UZowknUb2CQ==',key_name='tempest-TestNetworkBasicOps-658321597',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='41a6ffab20ee4735b3f190a1e087aed2',ramdisk_id='',reservation_id='r-v20zfk65',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1869958511',owner_user_name='tempest-TestNetworkBasicOps-1869958511-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T23:42:02Z,user_data=None,user_id='6a001028c92e48d0b5914bef72937111',uuid=cf0578c2-8c80-4b7e-a866-a753553c6f9e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d5e5a27b-2557-44b9-9b24-392e1a2c33bd", "address": "fa:16:3e:81:13:e3", "network": {"id": "865b8b48-3753-4a05-b614-ccecb1e87781", "bridge": "br-int", "label": "tempest-network-smoke--2066791378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41a6ffab20ee4735b3f190a1e087aed2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e5a27b-25", "ovs_interfaceid": "d5e5a27b-2557-44b9-9b24-392e1a2c33bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.557 189391 DEBUG nova.network.os_vif_util [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Converting VIF {"id": "d5e5a27b-2557-44b9-9b24-392e1a2c33bd", "address": "fa:16:3e:81:13:e3", "network": {"id": "865b8b48-3753-4a05-b614-ccecb1e87781", "bridge": "br-int", "label": "tempest-network-smoke--2066791378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41a6ffab20ee4735b3f190a1e087aed2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e5a27b-25", "ovs_interfaceid": "d5e5a27b-2557-44b9-9b24-392e1a2c33bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.557 189391 DEBUG nova.network.os_vif_util [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:13:e3,bridge_name='br-int',has_traffic_filtering=True,id=d5e5a27b-2557-44b9-9b24-392e1a2c33bd,network=Network(865b8b48-3753-4a05-b614-ccecb1e87781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5e5a27b-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.558 189391 DEBUG os_vif [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:13:e3,bridge_name='br-int',has_traffic_filtering=True,id=d5e5a27b-2557-44b9-9b24-392e1a2c33bd,network=Network(865b8b48-3753-4a05-b614-ccecb1e87781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5e5a27b-25') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.558 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.559 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.559 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.562 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.562 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd5e5a27b-25, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.562 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd5e5a27b-25, col_values=(('external_ids', {'iface-id': 'd5e5a27b-2557-44b9-9b24-392e1a2c33bd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:81:13:e3', 'vm-uuid': 'cf0578c2-8c80-4b7e-a866-a753553c6f9e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
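The AddBridgeCommand/AddPortCommand/DbSetCommand entries form one ovsdbapp transaction: ensure br-int exists, add the tap port, and stamp the Interface row's external_ids with the Neutron port id so ovn-controller can bind it. A sketch of the same transaction through ovsdbapp's Open_vSwitch schema API, assuming a local ovsdb socket at the default path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = "unix:/run/openvswitch/db.sock"  # default local socket; adjust as needed

    idl = connection.OvsdbIdl.from_server(OVSDB, "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # One atomic transaction, mirroring the three commands in the log.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
        txn.add(api.add_port("br-int", "tapd5e5a27b-25", may_exist=True))
        txn.add(api.db_set("Interface", "tapd5e5a27b-25",
                           ("external_ids",
                            {"iface-id": "d5e5a27b-2557-44b9-9b24-392e1a2c33bd",
                             "iface-status": "active",
                             "attached-mac": "fa:16:3e:81:13:e3"})))

Because may_exist=True is set, re-running the transaction is a no-op, which is exactly the "Transaction caused no change" result logged for the bridge above.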
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.564 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:05 compute-0 NetworkManager[56227]: <info>  [1764200525.5656] manager: (tapd5e5a27b-25): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.567 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.573 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.573 189391 INFO os_vif [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:13:e3,bridge_name='br-int',has_traffic_filtering=True,id=d5e5a27b-2557-44b9-9b24-392e1a2c33bd,network=Network(865b8b48-3753-4a05-b614-ccecb1e87781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5e5a27b-25')#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.627 189391 DEBUG nova.virt.libvirt.driver [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.628 189391 DEBUG nova.virt.libvirt.driver [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.629 189391 DEBUG nova.virt.libvirt.driver [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] No VIF found with MAC fa:16:3e:81:13:e3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 26 23:42:05 compute-0 nova_compute[189387]: 2025-11-26 23:42:05.629 189391 INFO nova.virt.libvirt.driver [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Using config drive#033[00m
Nov 26 23:42:06 compute-0 nova_compute[189387]: 2025-11-26 23:42:06.070 189391 INFO nova.virt.libvirt.driver [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Creating config drive at /var/lib/nova/instances/cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk.config#033[00m
Nov 26 23:42:06 compute-0 nova_compute[189387]: 2025-11-26 23:42:06.080 189391 DEBUG oslo_concurrency.processutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpst3f2i2v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:42:06 compute-0 nova_compute[189387]: 2025-11-26 23:42:06.210 189391 DEBUG oslo_concurrency.processutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpst3f2i2v" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
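The config drive is just an ISO9660 volume labeled config-2, built from a temporary directory of metadata files; cloud-init inside the guest locates it by that label. A sketch reproducing the call, assuming mkisofs (or genisoimage) is installed and metadata_dir points at a staged metadata tree (the /tmp path below is the one from the log and will not exist elsewhere):

    import subprocess

    metadata_dir = "/tmp/tmpst3f2i2v"  # staged openstack/ metadata tree
    iso_path = "/var/lib/nova/instances/cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk.config"

    subprocess.run(
        ["mkisofs", "-o", iso_path,
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r",
         "-V", "config-2",  # the volume label cloud-init probes for
         metadata_dir],
        check=True,
    )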
Nov 26 23:42:06 compute-0 kernel: tapd5e5a27b-25: entered promiscuous mode
Nov 26 23:42:06 compute-0 NetworkManager[56227]: <info>  [1764200526.2762] manager: (tapd5e5a27b-25): new Tun device (/org/freedesktop/NetworkManager/Devices/48)
Nov 26 23:42:06 compute-0 ovn_controller[97697]: 2025-11-26T23:42:06Z|00142|binding|INFO|Claiming lport d5e5a27b-2557-44b9-9b24-392e1a2c33bd for this chassis.
Nov 26 23:42:06 compute-0 ovn_controller[97697]: 2025-11-26T23:42:06Z|00143|binding|INFO|d5e5a27b-2557-44b9-9b24-392e1a2c33bd: Claiming fa:16:3e:81:13:e3 10.100.0.14
Nov 26 23:42:06 compute-0 nova_compute[189387]: 2025-11-26 23:42:06.279 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:06.286 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:13:e3 10.100.0.14'], port_security=['fa:16:3e:81:13:e3 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'cf0578c2-8c80-4b7e-a866-a753553c6f9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-865b8b48-3753-4a05-b614-ccecb1e87781', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '41a6ffab20ee4735b3f190a1e087aed2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6e207ef1-e39e-4231-9571-b551266f6cc9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5348c531-5047-446f-b828-c2a0486b273b, chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=d5e5a27b-2557-44b9-9b24-392e1a2c33bd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:06.288 106595 INFO neutron.agent.ovn.metadata.agent [-] Port d5e5a27b-2557-44b9-9b24-392e1a2c33bd in datapath 865b8b48-3753-4a05-b614-ccecb1e87781 bound to our chassis#033[00m
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:06.291 106595 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 865b8b48-3753-4a05-b614-ccecb1e87781#033[00m
Nov 26 23:42:06 compute-0 ovn_controller[97697]: 2025-11-26T23:42:06Z|00144|binding|INFO|Setting lport d5e5a27b-2557-44b9-9b24-392e1a2c33bd ovn-installed in OVS
Nov 26 23:42:06 compute-0 ovn_controller[97697]: 2025-11-26T23:42:06Z|00145|binding|INFO|Setting lport d5e5a27b-2557-44b9-9b24-392e1a2c33bd up in Southbound
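ovn-controller has now claimed the logical port for this chassis and marked it up in the Southbound database. The resulting Port_Binding row can be inspected directly; a sketch shelling out to ovn-sbctl, assuming the local Southbound connection works as it evidently does for the controller above:

    # Sketch: show the Port_Binding row that was just claimed and set up.
    import subprocess

    def show_port_binding(logical_port: str) -> str:
        out = subprocess.run(
            ["ovn-sbctl", "find", "Port_Binding", f"logical_port={logical_port}"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout  # chassis, mac, up, external_ids, ...

    print(show_port_binding("d5e5a27b-2557-44b9-9b24-392e1a2c33bd"))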
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:06.304 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca97a9e-2c36-4bcd-a482-2ef50adda6b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:06.306 106595 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap865b8b48-31 in ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:06.308 239757 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap865b8b48-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:06.308 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[5a6defe0-dce8-4ff4-be16-c6589c9576da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:06.309 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[f080c917-964b-46a2-b597-e71ca63c2319]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:06 compute-0 nova_compute[189387]: 2025-11-26 23:42:06.311 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:06 compute-0 systemd-machined[155674]: New machine qemu-9-instance-00000009.
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:06.321 106708 DEBUG oslo.privsep.daemon [-] privsep: reply[e6f23b08-44c1-47d3-8007-c3d8f40491e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:06 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Nov 26 23:42:06 compute-0 systemd-udevd[250950]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:06.347 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[2a5d5264-7ef2-4ac2-9daf-c92289437897]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:06 compute-0 NetworkManager[56227]: <info>  [1764200526.3623] device (tapd5e5a27b-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 23:42:06 compute-0 NetworkManager[56227]: <info>  [1764200526.3631] device (tapd5e5a27b-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:06.374 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[1f497cc1-5336-456e-8c82-d0fae77e4f4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:06 compute-0 NetworkManager[56227]: <info>  [1764200526.3796] manager: (tap865b8b48-30): new Veth device (/org/freedesktop/NetworkManager/Devices/49)
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:06.379 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[f66393a4-417e-41ca-bf7e-ea8a84234391]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:06 compute-0 systemd-udevd[250954]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:06.406 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[e3e9182c-9aaa-4d69-a888-166f479f4086]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:06.408 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[6f38dd92-408d-40c4-97c4-234ca151ff82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:06 compute-0 NetworkManager[56227]: <info>  [1764200526.4310] device (tap865b8b48-30): carrier: link connected
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:06.438 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[12af7a2d-f223-4f4c-95ca-bbf5c9a992ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:06.454 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[4165e40b-ea54-431f-bcd7-dd1328b07575]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap865b8b48-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:37:94:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520908, 'reachable_time': 41066, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250980, 'error': None, 'target': 'ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
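The RTM_NEWLINK reply above is the privsep daemon returning the attributes of the new tap865b8b48-31 veth endpoint inside the ovnmeta namespace (note IFLA_ADDRESS fa:16:3e:37:94:36 and IFLA_OPERSTATE UP). Neutron obtains this through pyroute2 behind privsep; a sketch of the same lookup done directly, assuming root privileges, pyroute2 installed, and the namespace name from the log:

    # Sketch: read the veth's netlink attributes inside the OVN metadata
    # namespace, roughly what neutron.privileged.agent.linux.ip_lib does
    # via the privsep daemon.
    from pyroute2 import NetNS

    ns_name = "ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781"
    with NetNS(ns_name) as ns:
        (idx,) = ns.link_lookup(ifname="tap865b8b48-31")
        (link,) = ns.get_links(idx)
        print(link.get_attr("IFLA_ADDRESS"))    # fa:16:3e:37:94:36
        print(link.get_attr("IFLA_OPERSTATE"))  # UP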
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:06.469 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[4d3b9ef2-00fc-4c34-aead-735e747a320e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe37:9436'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520908, 'tstamp': 520908}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250981, 'error': None, 'target': 'ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:06.484 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[bebc86cb-1324-4878-a3c6-313505dd807b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap865b8b48-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:37:94:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520908, 'reachable_time': 41066, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250982, 'error': None, 'target': 'ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:06.515 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[5ab19e3e-8f6b-4278-b3ee-943fa8039d32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:06.573 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[a0cb0937-0a73-4a6a-a8ba-4ceec433472e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:06.577 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap865b8b48-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:06.578 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:06.578 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap865b8b48-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:42:06 compute-0 nova_compute[189387]: 2025-11-26 23:42:06.579 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:06 compute-0 kernel: tap865b8b48-30: entered promiscuous mode
Nov 26 23:42:06 compute-0 NetworkManager[56227]: <info>  [1764200526.5816] manager: (tap865b8b48-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:06.586 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap865b8b48-30, col_values=(('external_ids', {'iface-id': '9bcac48d-895a-4cd4-ba63-78258e9255b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:42:06 compute-0 nova_compute[189387]: 2025-11-26 23:42:06.588 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:06 compute-0 nova_compute[189387]: 2025-11-26 23:42:06.588 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:06 compute-0 ovn_controller[97697]: 2025-11-26T23:42:06Z|00146|binding|INFO|Releasing lport 9bcac48d-895a-4cd4-ba63-78258e9255b2 from this chassis (sb_readonly=0)
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:06.589 106595 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/865b8b48-3753-4a05-b614-ccecb1e87781.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/865b8b48-3753-4a05-b614-ccecb1e87781.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:06.601 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[c3c469f7-81fc-4d67-963d-857e19fd46d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:06.602 106595 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: global
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]:    log         /dev/log local0 debug
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]:    log-tag     haproxy-metadata-proxy-865b8b48-3753-4a05-b614-ccecb1e87781
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]:    user        root
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]:    group       root
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]:    maxconn     1024
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]:    pidfile     /var/lib/neutron/external/pids/865b8b48-3753-4a05-b614-ccecb1e87781.pid.haproxy
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]:    daemon
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: defaults
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]:    log global
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]:    mode http
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]:    option httplog
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]:    option dontlognull
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]:    option http-server-close
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]:    option forwardfor
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]:    retries                 3
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]:    timeout http-request    30s
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]:    timeout connect         30s
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]:    timeout client          32s
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]:    timeout server          32s
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]:    timeout http-keep-alive 30s
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: listen listener
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]:    bind 169.254.169.254:80
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]:    server metadata /var/lib/neutron/metadata_proxy
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]:    http-request add-header X-OVN-Network-ID 865b8b48-3753-4a05-b614-ccecb1e87781
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 26 23:42:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:06.602 106595 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781', 'env', 'PROCESS_TAG=haproxy-865b8b48-3753-4a05-b614-ccecb1e87781', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/865b8b48-3753-4a05-b614-ccecb1e87781.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
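With the config rendered, the agent launches haproxy inside the namespace via rootwrap, tagging the process so the ProcessMonitor seen later can find it. A simplified sketch of the equivalent manual launch (drops rootwrap and PROCESS_TAG, keeps the paths as logged; requires root and the namespace to exist):

    # Sketch: start the metadata proxy by hand, mirroring the rootwrap call above.
    import subprocess

    net_id = "865b8b48-3753-4a05-b614-ccecb1e87781"
    subprocess.run(
        ["ip", "netns", "exec", f"ovnmeta-{net_id}",
         "haproxy", "-f",
         f"/var/lib/neutron/ovn-metadata-proxy/{net_id}.conf"],
        check=True,
    )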
Nov 26 23:42:06 compute-0 nova_compute[189387]: 2025-11-26 23:42:06.606 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:06 compute-0 nova_compute[189387]: 2025-11-26 23:42:06.658 189391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764200511.657274, c6b20e96-2371-4349-b934-bdb87bec59d0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:42:06 compute-0 nova_compute[189387]: 2025-11-26 23:42:06.658 189391 INFO nova.compute.manager [-] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] VM Stopped (Lifecycle Event)#033[00m
Nov 26 23:42:06 compute-0 nova_compute[189387]: 2025-11-26 23:42:06.674 189391 DEBUG nova.compute.manager [None req-614b86ac-0f8a-44fc-b002-a59ac4caff12 - - - - - -] [instance: c6b20e96-2371-4349-b934-bdb87bec59d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:42:06 compute-0 nova_compute[189387]: 2025-11-26 23:42:06.969 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200526.9678583, cf0578c2-8c80-4b7e-a866-a753553c6f9e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:42:06 compute-0 nova_compute[189387]: 2025-11-26 23:42:06.969 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] VM Started (Lifecycle Event)#033[00m
Nov 26 23:42:06 compute-0 nova_compute[189387]: 2025-11-26 23:42:06.997 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:42:07 compute-0 podman[251020]: 2025-11-26 23:42:07.002334993 +0000 UTC m=+0.068362350 container create 8e4242f61e14d1861afa39389c54aacf8e93d60a618d3cfade3c19b855dc42ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 26 23:42:07 compute-0 nova_compute[189387]: 2025-11-26 23:42:07.004 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200526.968321, cf0578c2-8c80-4b7e-a866-a753553c6f9e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:42:07 compute-0 nova_compute[189387]: 2025-11-26 23:42:07.006 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] VM Paused (Lifecycle Event)#033[00m
Nov 26 23:42:07 compute-0 nova_compute[189387]: 2025-11-26 23:42:07.024 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:42:07 compute-0 nova_compute[189387]: 2025-11-26 23:42:07.032 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 23:42:07 compute-0 nova_compute[189387]: 2025-11-26 23:42:07.049 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
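The Paused event mid-spawn is expected: libvirt creates the domain paused while Nova finishes plugging devices, so the sync compares DB power_state 0 against VM power_state 3 and skips because task_state is still spawning. The numeric codes in these lines follow nova.compute.power_state; a sketch of the mapping for reference:

    # Sketch: the power-state codes appearing in the log lines above
    # (mirrors the constants in nova/compute/power_state.py).
    POWER_STATES = {
        0: "NOSTATE",    # DB power_state before the first sync
        1: "RUNNING",    # seen once the guest resumes
        3: "PAUSED",     # libvirt starts the domain paused during spawn
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }
    print(POWER_STATES[3], "->", POWER_STATES[1])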
Nov 26 23:42:07 compute-0 systemd[1]: Started libpod-conmon-8e4242f61e14d1861afa39389c54aacf8e93d60a618d3cfade3c19b855dc42ce.scope.
Nov 26 23:42:07 compute-0 podman[251020]: 2025-11-26 23:42:06.967201518 +0000 UTC m=+0.033228895 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 26 23:42:07 compute-0 systemd[1]: Started libcrun container.
Nov 26 23:42:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ea13b9f8b7b03b1eb930a0052dc45a35110715364ba2c8104f9857ad1cf33ad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 26 23:42:07 compute-0 podman[251020]: 2025-11-26 23:42:07.107138173 +0000 UTC m=+0.173165550 container init 8e4242f61e14d1861afa39389c54aacf8e93d60a618d3cfade3c19b855dc42ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 23:42:07 compute-0 podman[251020]: 2025-11-26 23:42:07.114953132 +0000 UTC m=+0.180980479 container start 8e4242f61e14d1861afa39389c54aacf8e93d60a618d3cfade3c19b855dc42ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 26 23:42:07 compute-0 neutron-haproxy-ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781[251034]: [NOTICE]   (251040) : New worker (251042) forked
Nov 26 23:42:07 compute-0 neutron-haproxy-ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781[251034]: [NOTICE]   (251040) : Loading success.
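haproxy is now serving 169.254.169.254:80 inside the namespace, forwarding to the agent's UNIX socket and adding the X-OVN-Network-ID header the metadata service uses to identify the network. From a guest on this network, the standard probe would look like the following sketch (run inside the instance; reachability depends on the image and routing):

    # Sketch: metadata fetch as a guest on network 865b8b48-... would do it.
    import urllib.request

    url = "http://169.254.169.254/openstack/latest/meta_data.json"
    with urllib.request.urlopen(url, timeout=5) as resp:
        print(resp.read().decode())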
Nov 26 23:42:07 compute-0 nova_compute[189387]: 2025-11-26 23:42:07.188 189391 DEBUG nova.network.neutron [req-73da241e-739c-4637-8227-08f76e8ad2c2 req-49128b95-0e3d-4d15-ad0e-69a579e74114 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Updated VIF entry in instance network info cache for port d5e5a27b-2557-44b9-9b24-392e1a2c33bd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 23:42:07 compute-0 nova_compute[189387]: 2025-11-26 23:42:07.189 189391 DEBUG nova.network.neutron [req-73da241e-739c-4637-8227-08f76e8ad2c2 req-49128b95-0e3d-4d15-ad0e-69a579e74114 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Updating instance_info_cache with network_info: [{"id": "d5e5a27b-2557-44b9-9b24-392e1a2c33bd", "address": "fa:16:3e:81:13:e3", "network": {"id": "865b8b48-3753-4a05-b614-ccecb1e87781", "bridge": "br-int", "label": "tempest-network-smoke--2066791378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41a6ffab20ee4735b3f190a1e087aed2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e5a27b-25", "ovs_interfaceid": "d5e5a27b-2557-44b9-9b24-392e1a2c33bd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:42:07 compute-0 nova_compute[189387]: 2025-11-26 23:42:07.204 189391 DEBUG oslo_concurrency.lockutils [req-73da241e-739c-4637-8227-08f76e8ad2c2 req-49128b95-0e3d-4d15-ad0e-69a579e74114 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Releasing lock "refresh_cache-cf0578c2-8c80-4b7e-a866-a753553c6f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
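The instance_info_cache payload two entries above is plain JSON once the log prefix is stripped, which makes cache-drift debugging scriptable. A sketch that pulls the fixed IPs out of such a blob, assuming the structure exactly as logged:

    # Sketch: extract fixed IPs from an instance_info_cache network_info blob.
    import json

    def fixed_ips(network_info_json: str) -> list[str]:
        vifs = json.loads(network_info_json)
        return [ip["address"]
                for vif in vifs
                for subnet in vif["network"]["subnets"]
                for ip in subnet["ips"]]

    # For the blob logged above this returns ["10.100.0.14"].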
Nov 26 23:42:07 compute-0 nova_compute[189387]: 2025-11-26 23:42:07.434 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:09.651 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:09.651 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:42:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:09.652 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:42:10 compute-0 nova_compute[189387]: 2025-11-26 23:42:10.019 189391 DEBUG nova.compute.manager [req-19b1c214-c162-444a-a552-b80862d92b3e req-178f7d67-aed7-49fa-b72e-9126e3b9fdfb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Received event network-vif-plugged-d5e5a27b-2557-44b9-9b24-392e1a2c33bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:42:10 compute-0 nova_compute[189387]: 2025-11-26 23:42:10.020 189391 DEBUG oslo_concurrency.lockutils [req-19b1c214-c162-444a-a552-b80862d92b3e req-178f7d67-aed7-49fa-b72e-9126e3b9fdfb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "cf0578c2-8c80-4b7e-a866-a753553c6f9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:10 compute-0 nova_compute[189387]: 2025-11-26 23:42:10.020 189391 DEBUG oslo_concurrency.lockutils [req-19b1c214-c162-444a-a552-b80862d92b3e req-178f7d67-aed7-49fa-b72e-9126e3b9fdfb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "cf0578c2-8c80-4b7e-a866-a753553c6f9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:42:10 compute-0 nova_compute[189387]: 2025-11-26 23:42:10.021 189391 DEBUG oslo_concurrency.lockutils [req-19b1c214-c162-444a-a552-b80862d92b3e req-178f7d67-aed7-49fa-b72e-9126e3b9fdfb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "cf0578c2-8c80-4b7e-a866-a753553c6f9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:42:10 compute-0 nova_compute[189387]: 2025-11-26 23:42:10.021 189391 DEBUG nova.compute.manager [req-19b1c214-c162-444a-a552-b80862d92b3e req-178f7d67-aed7-49fa-b72e-9126e3b9fdfb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Processing event network-vif-plugged-d5e5a27b-2557-44b9-9b24-392e1a2c33bd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 26 23:42:10 compute-0 nova_compute[189387]: 2025-11-26 23:42:10.022 189391 DEBUG nova.compute.manager [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
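These entries close the loop on Nova's external-event handshake: spawn blocked on network-vif-plugged from Neutron and resumed about 3 seconds later when the event arrived through the API. Conceptually it is an event-keyed wait; the sketch below is a heavy simplification of the pattern, not Nova's actual implementation:

    # Sketch: the external-event handshake pattern, heavily simplified.
    import threading

    _events: dict[tuple[str, str], threading.Event] = {}

    def prepare(instance_uuid: str, name: str) -> None:
        _events[(instance_uuid, name)] = threading.Event()

    def deliver(instance_uuid: str, name: str) -> None:
        # Called from the API side when Neutron posts the event.
        _events[(instance_uuid, name)].set()

    def wait(instance_uuid: str, name: str, timeout: float = 300.0) -> bool:
        # Spawn blocks here until the vif-plugged event is delivered.
        return _events[(instance_uuid, name)].wait(timeout)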
Nov 26 23:42:10 compute-0 nova_compute[189387]: 2025-11-26 23:42:10.028 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200530.0275784, cf0578c2-8c80-4b7e-a866-a753553c6f9e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:42:10 compute-0 nova_compute[189387]: 2025-11-26 23:42:10.029 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] VM Resumed (Lifecycle Event)#033[00m
Nov 26 23:42:10 compute-0 nova_compute[189387]: 2025-11-26 23:42:10.032 189391 DEBUG nova.virt.libvirt.driver [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 26 23:42:10 compute-0 nova_compute[189387]: 2025-11-26 23:42:10.038 189391 INFO nova.virt.libvirt.driver [-] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Instance spawned successfully.#033[00m
Nov 26 23:42:10 compute-0 nova_compute[189387]: 2025-11-26 23:42:10.039 189391 DEBUG nova.virt.libvirt.driver [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 26 23:42:10 compute-0 nova_compute[189387]: 2025-11-26 23:42:10.054 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:42:10 compute-0 nova_compute[189387]: 2025-11-26 23:42:10.066 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 23:42:10 compute-0 nova_compute[189387]: 2025-11-26 23:42:10.071 189391 DEBUG nova.virt.libvirt.driver [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:42:10 compute-0 nova_compute[189387]: 2025-11-26 23:42:10.072 189391 DEBUG nova.virt.libvirt.driver [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:42:10 compute-0 nova_compute[189387]: 2025-11-26 23:42:10.072 189391 DEBUG nova.virt.libvirt.driver [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:42:10 compute-0 nova_compute[189387]: 2025-11-26 23:42:10.073 189391 DEBUG nova.virt.libvirt.driver [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:42:10 compute-0 nova_compute[189387]: 2025-11-26 23:42:10.073 189391 DEBUG nova.virt.libvirt.driver [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:42:10 compute-0 nova_compute[189387]: 2025-11-26 23:42:10.073 189391 DEBUG nova.virt.libvirt.driver [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:42:10 compute-0 nova_compute[189387]: 2025-11-26 23:42:10.111 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 23:42:10 compute-0 nova_compute[189387]: 2025-11-26 23:42:10.144 189391 INFO nova.compute.manager [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Took 8.08 seconds to spawn the instance on the hypervisor.#033[00m
Nov 26 23:42:10 compute-0 nova_compute[189387]: 2025-11-26 23:42:10.145 189391 DEBUG nova.compute.manager [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:42:10 compute-0 nova_compute[189387]: 2025-11-26 23:42:10.222 189391 INFO nova.compute.manager [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Took 8.61 seconds to build instance.#033[00m
Nov 26 23:42:10 compute-0 nova_compute[189387]: 2025-11-26 23:42:10.245 189391 DEBUG oslo_concurrency.lockutils [None req-28349a8d-d034-479b-a9cf-d327de05133f 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "cf0578c2-8c80-4b7e-a866-a753553c6f9e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:42:10 compute-0 nova_compute[189387]: 2025-11-26 23:42:10.521 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:10 compute-0 nova_compute[189387]: 2025-11-26 23:42:10.565 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:10 compute-0 podman[251051]: 2025-11-26 23:42:10.844299143 +0000 UTC m=+0.143588393 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
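The health_status entries from here on are podman's periodic container health checks, each executing the /openstack/healthcheck script mounted into the container. The same check can be triggered on demand; a sketch using container names from this log:

    # Sketch: run a container's health check manually, as the periodic
    # podman timer does for ovn_controller above.
    import subprocess

    for name in ("ovn_controller", "ovn_metadata_agent", "node_exporter"):
        r = subprocess.run(["podman", "healthcheck", "run", name])
        print(name, "healthy" if r.returncode == 0 else "unhealthy")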
Nov 26 23:42:11 compute-0 nova_compute[189387]: 2025-11-26 23:42:11.770 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:12 compute-0 nova_compute[189387]: 2025-11-26 23:42:12.128 189391 DEBUG nova.compute.manager [req-585beb57-30fe-4759-bbc8-0c9308b84d42 req-dfc84e68-f8fd-4cdf-94af-ba0770e2605d f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Received event network-vif-plugged-d5e5a27b-2557-44b9-9b24-392e1a2c33bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:42:12 compute-0 nova_compute[189387]: 2025-11-26 23:42:12.128 189391 DEBUG oslo_concurrency.lockutils [req-585beb57-30fe-4759-bbc8-0c9308b84d42 req-dfc84e68-f8fd-4cdf-94af-ba0770e2605d f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "cf0578c2-8c80-4b7e-a866-a753553c6f9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:12 compute-0 nova_compute[189387]: 2025-11-26 23:42:12.130 189391 DEBUG oslo_concurrency.lockutils [req-585beb57-30fe-4759-bbc8-0c9308b84d42 req-dfc84e68-f8fd-4cdf-94af-ba0770e2605d f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "cf0578c2-8c80-4b7e-a866-a753553c6f9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:42:12 compute-0 nova_compute[189387]: 2025-11-26 23:42:12.134 189391 DEBUG oslo_concurrency.lockutils [req-585beb57-30fe-4759-bbc8-0c9308b84d42 req-dfc84e68-f8fd-4cdf-94af-ba0770e2605d f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "cf0578c2-8c80-4b7e-a866-a753553c6f9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:42:12 compute-0 nova_compute[189387]: 2025-11-26 23:42:12.137 189391 DEBUG nova.compute.manager [req-585beb57-30fe-4759-bbc8-0c9308b84d42 req-dfc84e68-f8fd-4cdf-94af-ba0770e2605d f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] No waiting events found dispatching network-vif-plugged-d5e5a27b-2557-44b9-9b24-392e1a2c33bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:42:12 compute-0 nova_compute[189387]: 2025-11-26 23:42:12.137 189391 WARNING nova.compute.manager [req-585beb57-30fe-4759-bbc8-0c9308b84d42 req-dfc84e68-f8fd-4cdf-94af-ba0770e2605d f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Received unexpected event network-vif-plugged-d5e5a27b-2557-44b9-9b24-392e1a2c33bd for instance with vm_state active and task_state None.#033[00m
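This WARNING is benign: Neutron re-sends network-vif-plugged when the port goes ACTIVE, but by then the instance is active with no task in flight and no registered waiter, so Nova logs the event and discards it. A sketch for filtering such post-activation duplicates when triaging logs:

    # Sketch: spot "unexpected event" warnings that arrive after the
    # instance is already active (usually harmless duplicates like the one above).
    import re

    PAT = re.compile(
        r"WARNING nova\.compute\.manager .*Received unexpected event "
        r"(?P<event>\S+) for instance with vm_state active"
    )

    def benign_duplicates(lines):
        return [m["event"] for line in lines if (m := PAT.search(line))]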
Nov 26 23:42:12 compute-0 podman[251076]: 2025-11-26 23:42:12.824992825 +0000 UTC m=+0.102237123 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 23:42:12 compute-0 podman[251088]: 2025-11-26 23:42:12.828121408 +0000 UTC m=+0.086505244 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 23:42:12 compute-0 podman[251075]: 2025-11-26 23:42:12.836198213 +0000 UTC m=+0.120908660 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-container, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vcs-type=git, io.openshift.tags=base rhel9, name=ubi9, release=1214.1726694543, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Nov 26 23:42:12 compute-0 podman[251082]: 2025-11-26 23:42:12.844701709 +0000 UTC m=+0.113741549 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent)
Nov 26 23:42:12 compute-0 podman[251098]: 2025-11-26 23:42:12.858075476 +0000 UTC m=+0.110439462 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_id=edpm, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
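The three health_status entries above come from podman's periodic container healthchecks: each container's config_data mounts a check script from /var/lib/openstack/healthchecks into /openstack and runs it as the configured 'healthcheck' test. A minimal sketch of re-running those checks by hand with the stock `podman healthcheck run` subcommand (the container names are taken from the log lines; the helper name is an assumption for the example):

    import subprocess

    def is_healthy(container: str) -> bool:
        # `podman healthcheck run` executes the container's configured test
        # command (here: /openstack/healthcheck, mounted read-only into
        # /openstack) and exits 0 when the check passes, which podman then
        # records as health_status=healthy with health_failing_streak=0.
        result = subprocess.run(["podman", "healthcheck", "run", container])
        return result.returncode == 0

    for name in ("kepler", "ovn_metadata_agent", "openstack_network_exporter"):
        print(name, "healthy" if is_healthy(name) else "unhealthy")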
Nov 26 23:42:15 compute-0 nova_compute[189387]: 2025-11-26 23:42:15.352 189391 DEBUG nova.objects.instance [None req-d5c1303f-cb92-4e7f-912f-8b43f24f469f 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Lazy-loading 'flavor' on Instance uuid 696e6032-d12c-4533-ae7c-c510dc917f0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:42:15 compute-0 nova_compute[189387]: 2025-11-26 23:42:15.406 189391 DEBUG oslo_concurrency.lockutils [None req-d5c1303f-cb92-4e7f-912f-8b43f24f469f 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Acquiring lock "refresh_cache-696e6032-d12c-4533-ae7c-c510dc917f0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:42:15 compute-0 nova_compute[189387]: 2025-11-26 23:42:15.406 189391 DEBUG oslo_concurrency.lockutils [None req-d5c1303f-cb92-4e7f-912f-8b43f24f469f 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Acquired lock "refresh_cache-696e6032-d12c-4533-ae7c-c510dc917f0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:42:15 compute-0 nova_compute[189387]: 2025-11-26 23:42:15.525 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:15 compute-0 nova_compute[189387]: 2025-11-26 23:42:15.566 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:16 compute-0 nova_compute[189387]: 2025-11-26 23:42:16.501 189391 DEBUG nova.compute.manager [req-31752406-26ac-465b-a849-c62469a6d05d req-68593ccb-14f4-4711-9ff9-63d1a234b655 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Received event network-changed-d5e5a27b-2557-44b9-9b24-392e1a2c33bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:42:16 compute-0 nova_compute[189387]: 2025-11-26 23:42:16.502 189391 DEBUG nova.compute.manager [req-31752406-26ac-465b-a849-c62469a6d05d req-68593ccb-14f4-4711-9ff9-63d1a234b655 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Refreshing instance network info cache due to event network-changed-d5e5a27b-2557-44b9-9b24-392e1a2c33bd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 23:42:16 compute-0 nova_compute[189387]: 2025-11-26 23:42:16.502 189391 DEBUG oslo_concurrency.lockutils [req-31752406-26ac-465b-a849-c62469a6d05d req-68593ccb-14f4-4711-9ff9-63d1a234b655 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "refresh_cache-cf0578c2-8c80-4b7e-a866-a753553c6f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:42:16 compute-0 nova_compute[189387]: 2025-11-26 23:42:16.502 189391 DEBUG oslo_concurrency.lockutils [req-31752406-26ac-465b-a849-c62469a6d05d req-68593ccb-14f4-4711-9ff9-63d1a234b655 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquired lock "refresh_cache-cf0578c2-8c80-4b7e-a866-a753553c6f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:42:16 compute-0 nova_compute[189387]: 2025-11-26 23:42:16.502 189391 DEBUG nova.network.neutron [req-31752406-26ac-465b-a849-c62469a6d05d req-68593ccb-14f4-4711-9ff9-63d1a234b655 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Refreshing network info cache for port d5e5a27b-2557-44b9-9b24-392e1a2c33bd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 23:42:17 compute-0 nova_compute[189387]: 2025-11-26 23:42:17.039 189391 DEBUG nova.network.neutron [None req-d5c1303f-cb92-4e7f-912f-8b43f24f469f 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 26 23:42:17 compute-0 nova_compute[189387]: 2025-11-26 23:42:17.159 189391 DEBUG nova.compute.manager [req-6a2ecd16-8ed3-4fb8-a0cc-81270b5de56a req-8a6d21fe-1fea-4a03-b141-a849e1edf3a4 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Received event network-changed-b2fce3d4-667e-40f1-8fad-b23b6e4286db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:42:17 compute-0 nova_compute[189387]: 2025-11-26 23:42:17.159 189391 DEBUG nova.compute.manager [req-6a2ecd16-8ed3-4fb8-a0cc-81270b5de56a req-8a6d21fe-1fea-4a03-b141-a849e1edf3a4 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Refreshing instance network info cache due to event network-changed-b2fce3d4-667e-40f1-8fad-b23b6e4286db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 23:42:17 compute-0 nova_compute[189387]: 2025-11-26 23:42:17.159 189391 DEBUG oslo_concurrency.lockutils [req-6a2ecd16-8ed3-4fb8-a0cc-81270b5de56a req-8a6d21fe-1fea-4a03-b141-a849e1edf3a4 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "refresh_cache-696e6032-d12c-4533-ae7c-c510dc917f0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.212 189391 DEBUG oslo_concurrency.lockutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Acquiring lock "8c6c2d42-56ca-46f9-a12a-54c84adf5dbd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.213 189391 DEBUG oslo_concurrency.lockutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Lock "8c6c2d42-56ca-46f9-a12a-54c84adf5dbd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
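The Acquiring/acquired lines around "_locked_do_build_and_run_instance" are emitted by oslo.concurrency's lock wrapper, which serializes builds per instance UUID. A minimal sketch of the same pattern with the public lockutils API, assuming nothing beyond what the log shows (the function body is a placeholder):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("8c6c2d42-56ca-46f9-a12a-54c84adf5dbd")
    def _locked_do_build_and_run_instance():
        # Work done while the per-instance lock is held; a second build
        # request for the same UUID blocks here, and lockutils logs the
        # "waited"/"held" durations seen in these lines on acquire/release.
        pass

    _locked_do_build_and_run_instance()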
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.236 189391 DEBUG nova.compute.manager [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.318 189391 DEBUG oslo_concurrency.lockutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.319 189391 DEBUG oslo_concurrency.lockutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.327 189391 DEBUG nova.virt.hardware [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.328 189391 INFO nova.compute.claims [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.584 189391 DEBUG nova.network.neutron [req-31752406-26ac-465b-a849-c62469a6d05d req-68593ccb-14f4-4711-9ff9-63d1a234b655 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Updated VIF entry in instance network info cache for port d5e5a27b-2557-44b9-9b24-392e1a2c33bd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.586 189391 DEBUG nova.network.neutron [req-31752406-26ac-465b-a849-c62469a6d05d req-68593ccb-14f4-4711-9ff9-63d1a234b655 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Updating instance_info_cache with network_info: [{"id": "d5e5a27b-2557-44b9-9b24-392e1a2c33bd", "address": "fa:16:3e:81:13:e3", "network": {"id": "865b8b48-3753-4a05-b614-ccecb1e87781", "bridge": "br-int", "label": "tempest-network-smoke--2066791378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41a6ffab20ee4735b3f190a1e087aed2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e5a27b-25", "ovs_interfaceid": "d5e5a27b-2557-44b9-9b24-392e1a2c33bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.604 189391 DEBUG nova.compute.provider_tree [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.607 189391 DEBUG oslo_concurrency.lockutils [req-31752406-26ac-465b-a849-c62469a6d05d req-68593ccb-14f4-4711-9ff9-63d1a234b655 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Releasing lock "refresh_cache-cf0578c2-8c80-4b7e-a866-a753553c6f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.619 189391 DEBUG nova.scheduler.client.report [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
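The inventory comparison above is how the resource tracker concludes nothing needs updating in placement. The schedulable capacity implied by those numbers follows placement's usual formula, capacity = (total - reserved) * allocation_ratio; a small worked sketch with the values copied from the log line:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB ~70.2
        print(rc, capacity)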
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.640 189391 DEBUG oslo_concurrency.lockutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.321s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.641 189391 DEBUG nova.compute.manager [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.691 189391 DEBUG nova.compute.manager [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.692 189391 DEBUG nova.network.neutron [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.718 189391 INFO nova.virt.libvirt.driver [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.738 189391 DEBUG nova.compute.manager [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.845 189391 DEBUG nova.compute.manager [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.846 189391 DEBUG nova.virt.libvirt.driver [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.847 189391 INFO nova.virt.libvirt.driver [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Creating image(s)#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.848 189391 DEBUG oslo_concurrency.lockutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Acquiring lock "/var/lib/nova/instances/8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.849 189391 DEBUG oslo_concurrency.lockutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Lock "/var/lib/nova/instances/8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.849 189391 DEBUG oslo_concurrency.lockutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Lock "/var/lib/nova/instances/8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.864 189391 DEBUG oslo_concurrency.processutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.902 189391 DEBUG nova.network.neutron [None req-d5c1303f-cb92-4e7f-912f-8b43f24f469f 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Updating instance_info_cache with network_info: [{"id": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "address": "fa:16:3e:94:50:8a", "network": {"id": "23864f37-12d9-4f3e-a0da-ef91c19406ac", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1986799011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cda1d63c3f9d4791a18030ebba1c1b11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2fce3d4-66", "ovs_interfaceid": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.919 189391 DEBUG oslo_concurrency.processutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
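The `/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- ...` prefix in these commands is oslo.concurrency's resource-capped subprocess wrapper: qemu-img info runs under a 1 GiB address-space limit and a 30-second CPU limit so a malformed image cannot wedge the compute service. A hedged sketch of issuing the same probe through the public API (path copied from the log; exact ProcessLimits usage is an assumption from the oslo.concurrency API):

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)

    # execute() prepends the oslo_concurrency.prlimit wrapper seen in the
    # log whenever a ProcessLimits object is passed via prlimit=.
    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C", "qemu-img", "info",
        "/var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685",
        "--force-share", "--output=json",
        prlimit=limits)
    print(out)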
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.920 189391 DEBUG oslo_concurrency.lockutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Acquiring lock "4bfc824fda96e5558a690ed70963ecd686d78685" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.921 189391 DEBUG oslo_concurrency.lockutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Lock "4bfc824fda96e5558a690ed70963ecd686d78685" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.932 189391 DEBUG oslo_concurrency.processutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.948 189391 DEBUG oslo_concurrency.lockutils [None req-d5c1303f-cb92-4e7f-912f-8b43f24f469f 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Releasing lock "refresh_cache-696e6032-d12c-4533-ae7c-c510dc917f0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.949 189391 DEBUG nova.compute.manager [None req-d5c1303f-cb92-4e7f-912f-8b43f24f469f 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.949 189391 DEBUG nova.compute.manager [None req-d5c1303f-cb92-4e7f-912f-8b43f24f469f 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] network_info to inject: |[{"id": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "address": "fa:16:3e:94:50:8a", "network": {"id": "23864f37-12d9-4f3e-a0da-ef91c19406ac", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1986799011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cda1d63c3f9d4791a18030ebba1c1b11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2fce3d4-66", "ovs_interfaceid": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.953 189391 DEBUG oslo_concurrency.lockutils [req-6a2ecd16-8ed3-4fb8-a0cc-81270b5de56a req-8a6d21fe-1fea-4a03-b141-a849e1edf3a4 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquired lock "refresh_cache-696e6032-d12c-4533-ae7c-c510dc917f0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.953 189391 DEBUG nova.network.neutron [req-6a2ecd16-8ed3-4fb8-a0cc-81270b5de56a req-8a6d21fe-1fea-4a03-b141-a849e1edf3a4 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Refreshing network info cache for port b2fce3d4-667e-40f1-8fad-b23b6e4286db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.987 189391 DEBUG oslo_concurrency.processutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:42:18 compute-0 nova_compute[189387]: 2025-11-26 23:42:18.988 189391 DEBUG oslo_concurrency.processutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685,backing_fmt=raw /var/lib/nova/instances/8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:42:19 compute-0 nova_compute[189387]: 2025-11-26 23:42:19.025 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:19 compute-0 nova_compute[189387]: 2025-11-26 23:42:19.029 189391 DEBUG nova.policy [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a4055ba44a1948148b34c151da34f6e3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '75af4c8383fc485a90ab9085bbabf0f8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
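The failed network:attach_external_network check above is a DEBUG-level outcome, not an error: the requester holds only the reader and member roles, that rule is admin-only by default, so nova simply treats the network as not externally attachable for this tenant and continues the build. A hedged sketch of evaluating the same rule with oslo.policy (the registered default shown is an assumption spelled out for the example, not nova's exact call site):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        "network:attach_external_network", "role:admin"))

    creds = {"roles": ["reader", "member"],
             "project_id": "75af4c8383fc485a90ab9085bbabf0f8",
             "user_id": "a4055ba44a1948148b34c151da34f6e3"}

    # do_raise=False returns a boolean instead of raising, matching the
    # soft failure logged above.
    print(enforcer.authorize("network:attach_external_network", {}, creds,
                             do_raise=False))  # False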
Nov 26 23:42:19 compute-0 nova_compute[189387]: 2025-11-26 23:42:19.031 189391 DEBUG oslo_concurrency.processutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685,backing_fmt=raw /var/lib/nova/instances/8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk 1073741824" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:42:19 compute-0 nova_compute[189387]: 2025-11-26 23:42:19.033 189391 DEBUG oslo_concurrency.lockutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Lock "4bfc824fda96e5558a690ed70963ecd686d78685" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
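The qemu-img create call that just completed is nova's copy-on-write image backend at work: the instance disk is a qcow2 overlay whose backing file is the shared raw image under _base, created at the flavor's 1 GiB root size. A minimal sketch of the same layout, with both paths copied from the log:

    import subprocess

    base = "/var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685"
    disk = "/var/lib/nova/instances/8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk"

    # Guest writes land in the per-instance overlay; the _base image stays
    # read-only and can back any number of instances concurrently.
    subprocess.run(
        ["qemu-img", "create", "-f", "qcow2",
         "-o", f"backing_file={base},backing_fmt=raw",
         disk, "1073741824"],
        check=True)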
Nov 26 23:42:19 compute-0 nova_compute[189387]: 2025-11-26 23:42:19.033 189391 DEBUG oslo_concurrency.processutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:42:19 compute-0 nova_compute[189387]: 2025-11-26 23:42:19.092 189391 DEBUG oslo_concurrency.processutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:42:19 compute-0 nova_compute[189387]: 2025-11-26 23:42:19.093 189391 DEBUG nova.virt.disk.api [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Checking if we can resize image /var/lib/nova/instances/8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 26 23:42:19 compute-0 nova_compute[189387]: 2025-11-26 23:42:19.094 189391 DEBUG oslo_concurrency.processutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:42:19 compute-0 nova_compute[189387]: 2025-11-26 23:42:19.152 189391 DEBUG oslo_concurrency.processutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:42:19 compute-0 nova_compute[189387]: 2025-11-26 23:42:19.156 189391 DEBUG nova.virt.disk.api [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Cannot resize image /var/lib/nova/instances/8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
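"Cannot resize image ... to a smaller size" is likewise informational: the requested size (1073741824 bytes) is not larger than the overlay's current virtual size, so nova leaves the disk as created. A sketch of that grow-only guard, assuming the check reduces to comparing the target against the virtual-size qemu-img reports (function name mirrors the log; the implementation is an illustration, not nova's code):

    import json
    import subprocess

    def can_resize_image(path: str, new_size: int) -> bool:
        info = json.loads(subprocess.check_output(
            ["qemu-img", "info", path, "--force-share", "--output=json"]))
        # qemu-img reports the guest-visible size in bytes as "virtual-size";
        # only strictly larger targets are treated as resizable.
        return new_size > info["virtual-size"]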
Nov 26 23:42:19 compute-0 nova_compute[189387]: 2025-11-26 23:42:19.157 189391 DEBUG nova.objects.instance [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Lazy-loading 'migration_context' on Instance uuid 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:42:19 compute-0 nova_compute[189387]: 2025-11-26 23:42:19.182 189391 DEBUG nova.virt.libvirt.driver [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 26 23:42:19 compute-0 nova_compute[189387]: 2025-11-26 23:42:19.183 189391 DEBUG nova.virt.libvirt.driver [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Ensure instance console log exists: /var/lib/nova/instances/8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 26 23:42:19 compute-0 nova_compute[189387]: 2025-11-26 23:42:19.184 189391 DEBUG oslo_concurrency.lockutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:19 compute-0 nova_compute[189387]: 2025-11-26 23:42:19.185 189391 DEBUG oslo_concurrency.lockutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:42:19 compute-0 nova_compute[189387]: 2025-11-26 23:42:19.187 189391 DEBUG oslo_concurrency.lockutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:42:19 compute-0 nova_compute[189387]: 2025-11-26 23:42:19.692 189391 DEBUG nova.objects.instance [None req-b2a5c585-d0e9-469c-92f6-f4494815bea2 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Lazy-loading 'flavor' on Instance uuid 696e6032-d12c-4533-ae7c-c510dc917f0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:42:19 compute-0 nova_compute[189387]: 2025-11-26 23:42:19.713 189391 DEBUG oslo_concurrency.lockutils [None req-b2a5c585-d0e9-469c-92f6-f4494815bea2 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Acquiring lock "refresh_cache-696e6032-d12c-4533-ae7c-c510dc917f0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:42:19 compute-0 nova_compute[189387]: 2025-11-26 23:42:19.876 189391 DEBUG nova.network.neutron [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Successfully created port: b298dc50-93b6-439e-8c42-b9795220b150 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 26 23:42:20 compute-0 nova_compute[189387]: 2025-11-26 23:42:20.527 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:20 compute-0 nova_compute[189387]: 2025-11-26 23:42:20.568 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:20 compute-0 nova_compute[189387]: 2025-11-26 23:42:20.662 189391 DEBUG nova.network.neutron [req-6a2ecd16-8ed3-4fb8-a0cc-81270b5de56a req-8a6d21fe-1fea-4a03-b141-a849e1edf3a4 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Updated VIF entry in instance network info cache for port b2fce3d4-667e-40f1-8fad-b23b6e4286db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 23:42:20 compute-0 nova_compute[189387]: 2025-11-26 23:42:20.663 189391 DEBUG nova.network.neutron [req-6a2ecd16-8ed3-4fb8-a0cc-81270b5de56a req-8a6d21fe-1fea-4a03-b141-a849e1edf3a4 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Updating instance_info_cache with network_info: [{"id": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "address": "fa:16:3e:94:50:8a", "network": {"id": "23864f37-12d9-4f3e-a0da-ef91c19406ac", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1986799011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cda1d63c3f9d4791a18030ebba1c1b11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2fce3d4-66", "ovs_interfaceid": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:42:20 compute-0 nova_compute[189387]: 2025-11-26 23:42:20.682 189391 DEBUG oslo_concurrency.lockutils [req-6a2ecd16-8ed3-4fb8-a0cc-81270b5de56a req-8a6d21fe-1fea-4a03-b141-a849e1edf3a4 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Releasing lock "refresh_cache-696e6032-d12c-4533-ae7c-c510dc917f0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:42:20 compute-0 nova_compute[189387]: 2025-11-26 23:42:20.683 189391 DEBUG oslo_concurrency.lockutils [None req-b2a5c585-d0e9-469c-92f6-f4494815bea2 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Acquired lock "refresh_cache-696e6032-d12c-4533-ae7c-c510dc917f0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:42:20 compute-0 nova_compute[189387]: 2025-11-26 23:42:20.775 189391 DEBUG nova.network.neutron [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Successfully updated port: b298dc50-93b6-439e-8c42-b9795220b150 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 26 23:42:20 compute-0 nova_compute[189387]: 2025-11-26 23:42:20.792 189391 DEBUG oslo_concurrency.lockutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Acquiring lock "refresh_cache-8c6c2d42-56ca-46f9-a12a-54c84adf5dbd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:42:20 compute-0 nova_compute[189387]: 2025-11-26 23:42:20.794 189391 DEBUG oslo_concurrency.lockutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Acquired lock "refresh_cache-8c6c2d42-56ca-46f9-a12a-54c84adf5dbd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:42:20 compute-0 nova_compute[189387]: 2025-11-26 23:42:20.794 189391 DEBUG nova.network.neutron [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 26 23:42:20 compute-0 nova_compute[189387]: 2025-11-26 23:42:20.867 189391 DEBUG nova.compute.manager [req-52dc5319-2bed-42a9-a1d7-37985af0f139 req-356b3dbe-54f1-4e61-a587-203afb421f03 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Received event network-changed-b298dc50-93b6-439e-8c42-b9795220b150 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:42:20 compute-0 nova_compute[189387]: 2025-11-26 23:42:20.869 189391 DEBUG nova.compute.manager [req-52dc5319-2bed-42a9-a1d7-37985af0f139 req-356b3dbe-54f1-4e61-a587-203afb421f03 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Refreshing instance network info cache due to event network-changed-b298dc50-93b6-439e-8c42-b9795220b150. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 23:42:20 compute-0 nova_compute[189387]: 2025-11-26 23:42:20.870 189391 DEBUG oslo_concurrency.lockutils [req-52dc5319-2bed-42a9-a1d7-37985af0f139 req-356b3dbe-54f1-4e61-a587-203afb421f03 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "refresh_cache-8c6c2d42-56ca-46f9-a12a-54c84adf5dbd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:42:21 compute-0 nova_compute[189387]: 2025-11-26 23:42:21.013 189391 DEBUG nova.network.neutron [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 26 23:42:21 compute-0 nova_compute[189387]: 2025-11-26 23:42:21.663 189391 DEBUG nova.network.neutron [None req-b2a5c585-d0e9-469c-92f6-f4494815bea2 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 26 23:42:21 compute-0 nova_compute[189387]: 2025-11-26 23:42:21.766 189391 DEBUG nova.compute.manager [req-db83e635-f9fd-4dba-b7b9-3efcf77bca76 req-6e7d5956-cbc9-4b88-8c78-5f64ac16641b f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Received event network-changed-b2fce3d4-667e-40f1-8fad-b23b6e4286db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:42:21 compute-0 nova_compute[189387]: 2025-11-26 23:42:21.767 189391 DEBUG nova.compute.manager [req-db83e635-f9fd-4dba-b7b9-3efcf77bca76 req-6e7d5956-cbc9-4b88-8c78-5f64ac16641b f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Refreshing instance network info cache due to event network-changed-b2fce3d4-667e-40f1-8fad-b23b6e4286db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 23:42:21 compute-0 nova_compute[189387]: 2025-11-26 23:42:21.768 189391 DEBUG oslo_concurrency.lockutils [req-db83e635-f9fd-4dba-b7b9-3efcf77bca76 req-6e7d5956-cbc9-4b88-8c78-5f64ac16641b f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "refresh_cache-696e6032-d12c-4533-ae7c-c510dc917f0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:42:21 compute-0 podman[251186]: 2025-11-26 23:42:21.828924402 +0000 UTC m=+0.126290593 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 23:42:22 compute-0 nova_compute[189387]: 2025-11-26 23:42:22.938 189391 DEBUG nova.network.neutron [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Updating instance_info_cache with network_info: [{"id": "b298dc50-93b6-439e-8c42-b9795220b150", "address": "fa:16:3e:77:71:58", "network": {"id": "3f903c92-a599-4991-906d-3ed8e3e8eabd", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2000708722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75af4c8383fc485a90ab9085bbabf0f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb298dc50-93", "ovs_interfaceid": "b298dc50-93b6-439e-8c42-b9795220b150", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:42:22 compute-0 nova_compute[189387]: 2025-11-26 23:42:22.958 189391 DEBUG oslo_concurrency.lockutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Releasing lock "refresh_cache-8c6c2d42-56ca-46f9-a12a-54c84adf5dbd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:42:22 compute-0 nova_compute[189387]: 2025-11-26 23:42:22.959 189391 DEBUG nova.compute.manager [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Instance network_info: |[{"id": "b298dc50-93b6-439e-8c42-b9795220b150", "address": "fa:16:3e:77:71:58", "network": {"id": "3f903c92-a599-4991-906d-3ed8e3e8eabd", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2000708722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75af4c8383fc485a90ab9085bbabf0f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb298dc50-93", "ovs_interfaceid": "b298dc50-93b6-439e-8c42-b9795220b150", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
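The network_info payloads logged throughout this section share one shape: a list of VIFs, each carrying a network with subnets, each subnet listing ips with optional floating_ips. A minimal sketch that walks that structure (the payload below is the VIF from the line above, truncated to the fields the loop touches):

    import json

    network_info = json.loads("""
    [{"id": "b298dc50-93b6-439e-8c42-b9795220b150",
      "address": "fa:16:3e:77:71:58",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.5", "type": "fixed",
                 "floating_ips": []}]}]}}]
    """)

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                # Prints each fixed IP plus any floating IPs NATed to it.
                print(vif["id"], ip["type"], ip["address"],
                      [f["address"] for f in ip.get("floating_ips", [])])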
Nov 26 23:42:22 compute-0 nova_compute[189387]: 2025-11-26 23:42:22.960 189391 DEBUG oslo_concurrency.lockutils [req-52dc5319-2bed-42a9-a1d7-37985af0f139 req-356b3dbe-54f1-4e61-a587-203afb421f03 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquired lock "refresh_cache-8c6c2d42-56ca-46f9-a12a-54c84adf5dbd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:42:22 compute-0 nova_compute[189387]: 2025-11-26 23:42:22.960 189391 DEBUG nova.network.neutron [req-52dc5319-2bed-42a9-a1d7-37985af0f139 req-356b3dbe-54f1-4e61-a587-203afb421f03 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Refreshing network info cache for port b298dc50-93b6-439e-8c42-b9795220b150 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 23:42:22 compute-0 nova_compute[189387]: 2025-11-26 23:42:22.963 189391 DEBUG nova.virt.libvirt.driver [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Start _get_guest_xml network_info=[{"id": "b298dc50-93b6-439e-8c42-b9795220b150", "address": "fa:16:3e:77:71:58", "network": {"id": "3f903c92-a599-4991-906d-3ed8e3e8eabd", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2000708722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75af4c8383fc485a90ab9085bbabf0f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb298dc50-93", "ovs_interfaceid": "b298dc50-93b6-439e-8c42-b9795220b150", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T23:40:04Z,direct_url=<?>,disk_format='qcow2',id=948c6d5b-0d46-4aec-8649-b6cdcb1a5694,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd2e793599b6418881c391df7f71e0c6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T23:40:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': '948c6d5b-0d46-4aec-8649-b6cdcb1a5694'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 26 23:42:22 compute-0 nova_compute[189387]: 2025-11-26 23:42:22.971 189391 WARNING nova.virt.libvirt.driver [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 23:42:22 compute-0 nova_compute[189387]: 2025-11-26 23:42:22.979 189391 DEBUG nova.virt.libvirt.host [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 23:42:22 compute-0 nova_compute[189387]: 2025-11-26 23:42:22.980 189391 DEBUG nova.virt.libvirt.host [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 23:42:22 compute-0 nova_compute[189387]: 2025-11-26 23:42:22.985 189391 DEBUG nova.virt.libvirt.host [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 23:42:22 compute-0 nova_compute[189387]: 2025-11-26 23:42:22.986 189391 DEBUG nova.virt.libvirt.host [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
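The two probes above look for a CPU controller first on cgroups v1 (missing) and then on the cgroups v2 unified hierarchy (found), which this host uses. On v2 the available controllers are listed in a single file, so a standalone check (a sketch, not Nova's implementation) is:

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        # On cgroup v2 the root cgroup.controllers file lists every controller
        # available to child cgroups, e.g. "cpuset cpu io memory pids".
        try:
            return "cpu" in Path(root, "cgroup.controllers").read_text().split()
        except FileNotFoundError:
            return False  # no unified hierarchy mounted at this path

    print(has_cgroupsv2_cpu_controller())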
Nov 26 23:42:22 compute-0 nova_compute[189387]: 2025-11-26 23:42:22.986 189391 DEBUG nova.virt.libvirt.driver [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 23:42:22 compute-0 nova_compute[189387]: 2025-11-26 23:42:22.987 189391 DEBUG nova.virt.hardware [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T23:40:03Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a4234b2d-ed51-4e17-ad57-a8fb6154451b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T23:40:04Z,direct_url=<?>,disk_format='qcow2',id=948c6d5b-0d46-4aec-8649-b6cdcb1a5694,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd2e793599b6418881c391df7f71e0c6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T23:40:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 23:42:22 compute-0 nova_compute[189387]: 2025-11-26 23:42:22.987 189391 DEBUG nova.virt.hardware [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 23:42:22 compute-0 nova_compute[189387]: 2025-11-26 23:42:22.988 189391 DEBUG nova.virt.hardware [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 23:42:22 compute-0 nova_compute[189387]: 2025-11-26 23:42:22.988 189391 DEBUG nova.virt.hardware [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 23:42:22 compute-0 nova_compute[189387]: 2025-11-26 23:42:22.989 189391 DEBUG nova.virt.hardware [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 23:42:22 compute-0 nova_compute[189387]: 2025-11-26 23:42:22.989 189391 DEBUG nova.virt.hardware [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 23:42:22 compute-0 nova_compute[189387]: 2025-11-26 23:42:22.989 189391 DEBUG nova.virt.hardware [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 23:42:22 compute-0 nova_compute[189387]: 2025-11-26 23:42:22.990 189391 DEBUG nova.virt.hardware [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 23:42:22 compute-0 nova_compute[189387]: 2025-11-26 23:42:22.990 189391 DEBUG nova.virt.hardware [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 23:42:22 compute-0 nova_compute[189387]: 2025-11-26 23:42:22.991 189391 DEBUG nova.virt.hardware [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 23:42:22 compute-0 nova_compute[189387]: 2025-11-26 23:42:22.991 189391 DEBUG nova.virt.hardware [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
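The topology search above enumerates every sockets:cores:threads split whose product equals the instance's vCPU count, capped at 65536 per dimension; for the 1-vCPU m1.nano the only candidate is 1:1:1. A self-contained sketch of that enumeration:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Mirror the "Build topologies for N vcpu(s)" step: every factorization
        # sockets * cores * threads == vcpus within the per-dimension limits.
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            rest = vcpus // s
            for c in range(1, min(rest, max_cores) + 1):
                if rest % c:
                    continue
                t = rest // c
                if t <= max_threads:
                    topos.append((s, c, t))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)], the single topology logged above
    print(possible_topologies(4))  # six options, from (1, 1, 4) to (4, 1, 1)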
Nov 26 23:42:22 compute-0 nova_compute[189387]: 2025-11-26 23:42:22.994 189391 DEBUG nova.virt.libvirt.vif [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T23:42:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1593775238',display_name='tempest-TestServerBasicOps-server-1593775238',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1593775238',id=10,image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAW6WLuEWeJ3uGhCOJvpZEYHtUsyu3kMo+zjCf77nj/CKShEF5RM77Qbj9w2/a63wSpqxs7HM2PI7A3+mwx/astLsUFGUKpowR2wdWBKmdSPy3reaD8i1gUwpy4qqUlH6Q==',key_name='tempest-TestServerBasicOps-14952678',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='75af4c8383fc485a90ab9085bbabf0f8',ramdisk_id='',reservation_id='r-ecdm7s5e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-326940996',owner_user_name='tempest-TestServerBasicOps-326940996-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T23:42:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a4055ba44a1948148b34c151da34f6e3',uuid=8c6c2d42-56ca-46f9-a12a-54c84adf5dbd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b298dc50-93b6-439e-8c42-b9795220b150", "address": "fa:16:3e:77:71:58", "network": {"id": "3f903c92-a599-4991-906d-3ed8e3e8eabd", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2000708722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75af4c8383fc485a90ab9085bbabf0f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb298dc50-93", "ovs_interfaceid": "b298dc50-93b6-439e-8c42-b9795220b150", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 26 23:42:22 compute-0 nova_compute[189387]: 2025-11-26 23:42:22.995 189391 DEBUG nova.network.os_vif_util [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Converting VIF {"id": "b298dc50-93b6-439e-8c42-b9795220b150", "address": "fa:16:3e:77:71:58", "network": {"id": "3f903c92-a599-4991-906d-3ed8e3e8eabd", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2000708722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75af4c8383fc485a90ab9085bbabf0f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb298dc50-93", "ovs_interfaceid": "b298dc50-93b6-439e-8c42-b9795220b150", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:42:22 compute-0 nova_compute[189387]: 2025-11-26 23:42:22.996 189391 DEBUG nova.network.os_vif_util [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:71:58,bridge_name='br-int',has_traffic_filtering=True,id=b298dc50-93b6-439e-8c42-b9795220b150,network=Network(3f903c92-a599-4991-906d-3ed8e3e8eabd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb298dc50-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
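The conversion above flattens the nested Neutron port payload into the flat os-vif object that the ovs plugin consumes. Roughly (using a hypothetical stand-in class whose fields follow the "Converted object" line, not the real os_vif library):

    from dataclasses import dataclass

    @dataclass
    class VIFOpenVSwitchSketch:
        id: str
        address: str
        bridge_name: str
        vif_name: str
        has_traffic_filtering: bool
        active: bool

    def nova_to_osvif_sketch(vif):
        # Pick the flat fields out of the nested "Converting VIF" dict.
        return VIFOpenVSwitchSketch(
            id=vif["id"],
            address=vif["address"],
            bridge_name=vif["details"]["bridge_name"],
            vif_name=vif["devname"],
            has_traffic_filtering=vif["details"]["port_filter"],
            active=vif["active"],
        )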
Nov 26 23:42:22 compute-0 nova_compute[189387]: 2025-11-26 23:42:22.997 189391 DEBUG nova.objects.instance [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:42:23 compute-0 nova_compute[189387]: 2025-11-26 23:42:23.011 189391 DEBUG nova.virt.libvirt.driver [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] End _get_guest_xml xml=<domain type="kvm">
Nov 26 23:42:23 compute-0 nova_compute[189387]:  <uuid>8c6c2d42-56ca-46f9-a12a-54c84adf5dbd</uuid>
Nov 26 23:42:23 compute-0 nova_compute[189387]:  <name>instance-0000000a</name>
Nov 26 23:42:23 compute-0 nova_compute[189387]:  <memory>131072</memory>
Nov 26 23:42:23 compute-0 nova_compute[189387]:  <vcpu>1</vcpu>
Nov 26 23:42:23 compute-0 nova_compute[189387]:  <metadata>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 23:42:23 compute-0 nova_compute[189387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:      <nova:name>tempest-TestServerBasicOps-server-1593775238</nova:name>
Nov 26 23:42:23 compute-0 nova_compute[189387]:      <nova:creationTime>2025-11-26 23:42:22</nova:creationTime>
Nov 26 23:42:23 compute-0 nova_compute[189387]:      <nova:flavor name="m1.nano">
Nov 26 23:42:23 compute-0 nova_compute[189387]:        <nova:memory>128</nova:memory>
Nov 26 23:42:23 compute-0 nova_compute[189387]:        <nova:disk>1</nova:disk>
Nov 26 23:42:23 compute-0 nova_compute[189387]:        <nova:swap>0</nova:swap>
Nov 26 23:42:23 compute-0 nova_compute[189387]:        <nova:ephemeral>0</nova:ephemeral>
Nov 26 23:42:23 compute-0 nova_compute[189387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 23:42:23 compute-0 nova_compute[189387]:      </nova:flavor>
Nov 26 23:42:23 compute-0 nova_compute[189387]:      <nova:owner>
Nov 26 23:42:23 compute-0 nova_compute[189387]:        <nova:user uuid="a4055ba44a1948148b34c151da34f6e3">tempest-TestServerBasicOps-326940996-project-member</nova:user>
Nov 26 23:42:23 compute-0 nova_compute[189387]:        <nova:project uuid="75af4c8383fc485a90ab9085bbabf0f8">tempest-TestServerBasicOps-326940996</nova:project>
Nov 26 23:42:23 compute-0 nova_compute[189387]:      </nova:owner>
Nov 26 23:42:23 compute-0 nova_compute[189387]:      <nova:root type="image" uuid="948c6d5b-0d46-4aec-8649-b6cdcb1a5694"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:      <nova:ports>
Nov 26 23:42:23 compute-0 nova_compute[189387]:        <nova:port uuid="b298dc50-93b6-439e-8c42-b9795220b150">
Nov 26 23:42:23 compute-0 nova_compute[189387]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:        </nova:port>
Nov 26 23:42:23 compute-0 nova_compute[189387]:      </nova:ports>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    </nova:instance>
Nov 26 23:42:23 compute-0 nova_compute[189387]:  </metadata>
Nov 26 23:42:23 compute-0 nova_compute[189387]:  <sysinfo type="smbios">
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <system>
Nov 26 23:42:23 compute-0 nova_compute[189387]:      <entry name="manufacturer">RDO</entry>
Nov 26 23:42:23 compute-0 nova_compute[189387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 23:42:23 compute-0 nova_compute[189387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 23:42:23 compute-0 nova_compute[189387]:      <entry name="serial">8c6c2d42-56ca-46f9-a12a-54c84adf5dbd</entry>
Nov 26 23:42:23 compute-0 nova_compute[189387]:      <entry name="uuid">8c6c2d42-56ca-46f9-a12a-54c84adf5dbd</entry>
Nov 26 23:42:23 compute-0 nova_compute[189387]:      <entry name="family">Virtual Machine</entry>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    </system>
Nov 26 23:42:23 compute-0 nova_compute[189387]:  </sysinfo>
Nov 26 23:42:23 compute-0 nova_compute[189387]:  <os>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <boot dev="hd"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <smbios mode="sysinfo"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:  </os>
Nov 26 23:42:23 compute-0 nova_compute[189387]:  <features>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <acpi/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <apic/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <vmcoreinfo/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:  </features>
Nov 26 23:42:23 compute-0 nova_compute[189387]:  <clock offset="utc">
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <timer name="hpet" present="no"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:  </clock>
Nov 26 23:42:23 compute-0 nova_compute[189387]:  <cpu mode="host-model" match="exact">
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:  </cpu>
Nov 26 23:42:23 compute-0 nova_compute[189387]:  <devices>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <disk type="file" device="disk">
Nov 26 23:42:23 compute-0 nova_compute[189387]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:      <target dev="vda" bus="virtio"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <disk type="file" device="cdrom">
Nov 26 23:42:23 compute-0 nova_compute[189387]:      <driver name="qemu" type="raw" cache="none"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk.config"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:      <target dev="sda" bus="sata"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <interface type="ethernet">
Nov 26 23:42:23 compute-0 nova_compute[189387]:      <mac address="fa:16:3e:77:71:58"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:      <mtu size="1442"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:      <target dev="tapb298dc50-93"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    </interface>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <serial type="pty">
Nov 26 23:42:23 compute-0 nova_compute[189387]:      <log file="/var/lib/nova/instances/8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/console.log" append="off"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    </serial>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <video>
Nov 26 23:42:23 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    </video>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <input type="tablet" bus="usb"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <rng model="virtio">
Nov 26 23:42:23 compute-0 nova_compute[189387]:      <backend model="random">/dev/urandom</backend>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    </rng>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <controller type="usb" index="0"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    <memballoon model="virtio">
Nov 26 23:42:23 compute-0 nova_compute[189387]:      <stats period="10"/>
Nov 26 23:42:23 compute-0 nova_compute[189387]:    </memballoon>
Nov 26 23:42:23 compute-0 nova_compute[189387]:  </devices>
Nov 26 23:42:23 compute-0 nova_compute[189387]: </domain>
Nov 26 23:42:23 compute-0 nova_compute[189387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
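Two details of the XML above are easy to misread: libvirt's <memory> element defaults to KiB, so 131072 is the flavor's 128 MiB, and the single <vcpu> matches m1.nano. A quick standard-library parse over a trimmed copy of the dump:

    import xml.etree.ElementTree as ET

    XML = """<domain type="kvm">
      <uuid>8c6c2d42-56ca-46f9-a12a-54c84adf5dbd</uuid>
      <name>instance-0000000a</name>
      <memory>131072</memory>
      <vcpu>1</vcpu>
    </domain>"""

    dom = ET.fromstring(XML)
    # <memory> is KiB by default: 131072 KiB == 128 MiB.
    print(dom.findtext("name"),
          int(dom.findtext("memory")) // 1024, "MiB,",
          dom.findtext("vcpu"), "vCPU")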
Nov 26 23:42:23 compute-0 nova_compute[189387]: 2025-11-26 23:42:23.012 189391 DEBUG nova.compute.manager [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Preparing to wait for external event network-vif-plugged-b298dc50-93b6-439e-8c42-b9795220b150 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 26 23:42:23 compute-0 nova_compute[189387]: 2025-11-26 23:42:23.012 189391 DEBUG oslo_concurrency.lockutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Acquiring lock "8c6c2d42-56ca-46f9-a12a-54c84adf5dbd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:23 compute-0 nova_compute[189387]: 2025-11-26 23:42:23.013 189391 DEBUG oslo_concurrency.lockutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Lock "8c6c2d42-56ca-46f9-a12a-54c84adf5dbd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:42:23 compute-0 nova_compute[189387]: 2025-11-26 23:42:23.013 189391 DEBUG oslo_concurrency.lockutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Lock "8c6c2d42-56ca-46f9-a12a-54c84adf5dbd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
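The lock dance above registers a waiter for the network-vif-plugged event before the port is actually plugged, so a Neutron notification that races ahead of the plug cannot be missed. The pattern, reduced to plain threading (a sketch, not Nova's InstanceEvents class):

    import threading

    _events, _lock = {}, threading.Lock()

    def prepare_for_event(name):
        # Register the waiter first, under a lock, before kicking off the
        # work that will eventually fire the event.
        with _lock:
            return _events.setdefault(name, threading.Event())

    def deliver_event(name):
        with _lock:
            ev = _events.get(name)
        if ev:
            ev.set()

    w = prepare_for_event("network-vif-plugged-b298dc50-93b6-439e-8c42-b9795220b150")
    deliver_event("network-vif-plugged-b298dc50-93b6-439e-8c42-b9795220b150")
    print(w.wait(timeout=300))  # True: the event was delivered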
Nov 26 23:42:23 compute-0 nova_compute[189387]: 2025-11-26 23:42:23.013 189391 DEBUG nova.virt.libvirt.vif [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T23:42:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1593775238',display_name='tempest-TestServerBasicOps-server-1593775238',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1593775238',id=10,image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAW6WLuEWeJ3uGhCOJvpZEYHtUsyu3kMo+zjCf77nj/CKShEF5RM77Qbj9w2/a63wSpqxs7HM2PI7A3+mwx/astLsUFGUKpowR2wdWBKmdSPy3reaD8i1gUwpy4qqUlH6Q==',key_name='tempest-TestServerBasicOps-14952678',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='75af4c8383fc485a90ab9085bbabf0f8',ramdisk_id='',reservation_id='r-ecdm7s5e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-326940996',owner_user_name='tempest-TestServerBasicOps-326940996-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T23:42:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a4055ba44a1948148b34c151da34f6e3',uuid=8c6c2d42-56ca-46f9-a12a-54c84adf5dbd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b298dc50-93b6-439e-8c42-b9795220b150", "address": "fa:16:3e:77:71:58", "network": {"id": "3f903c92-a599-4991-906d-3ed8e3e8eabd", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2000708722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75af4c8383fc485a90ab9085bbabf0f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb298dc50-93", "ovs_interfaceid": "b298dc50-93b6-439e-8c42-b9795220b150", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 26 23:42:23 compute-0 nova_compute[189387]: 2025-11-26 23:42:23.014 189391 DEBUG nova.network.os_vif_util [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Converting VIF {"id": "b298dc50-93b6-439e-8c42-b9795220b150", "address": "fa:16:3e:77:71:58", "network": {"id": "3f903c92-a599-4991-906d-3ed8e3e8eabd", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2000708722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75af4c8383fc485a90ab9085bbabf0f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb298dc50-93", "ovs_interfaceid": "b298dc50-93b6-439e-8c42-b9795220b150", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:42:23 compute-0 nova_compute[189387]: 2025-11-26 23:42:23.014 189391 DEBUG nova.network.os_vif_util [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:71:58,bridge_name='br-int',has_traffic_filtering=True,id=b298dc50-93b6-439e-8c42-b9795220b150,network=Network(3f903c92-a599-4991-906d-3ed8e3e8eabd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb298dc50-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:42:23 compute-0 nova_compute[189387]: 2025-11-26 23:42:23.015 189391 DEBUG os_vif [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:71:58,bridge_name='br-int',has_traffic_filtering=True,id=b298dc50-93b6-439e-8c42-b9795220b150,network=Network(3f903c92-a599-4991-906d-3ed8e3e8eabd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb298dc50-93') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 26 23:42:23 compute-0 nova_compute[189387]: 2025-11-26 23:42:23.015 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:23 compute-0 nova_compute[189387]: 2025-11-26 23:42:23.015 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:42:23 compute-0 nova_compute[189387]: 2025-11-26 23:42:23.016 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:42:23 compute-0 nova_compute[189387]: 2025-11-26 23:42:23.019 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:23 compute-0 nova_compute[189387]: 2025-11-26 23:42:23.019 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb298dc50-93, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:42:23 compute-0 nova_compute[189387]: 2025-11-26 23:42:23.019 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb298dc50-93, col_values=(('external_ids', {'iface-id': 'b298dc50-93b6-439e-8c42-b9795220b150', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:77:71:58', 'vm-uuid': '8c6c2d42-56ca-46f9-a12a-54c84adf5dbd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:42:23 compute-0 nova_compute[189387]: 2025-11-26 23:42:23.021 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
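The three ovsdbapp commands just logged (ensure br-int exists, add the tap port, set its external_ids) amount to a single ovs-vsctl invocation. Driving it from Python with the values from the log (assumes OVS tools and sufficient privileges):

    import subprocess

    bridge, port = "br-int", "tapb298dc50-93"
    external_ids = {
        "iface-id": "b298dc50-93b6-439e-8c42-b9795220b150",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:77:71:58",
        "vm-uuid": "8c6c2d42-56ca-46f9-a12a-54c84adf5dbd",
    }
    cmd = ["ovs-vsctl",
           "--may-exist", "add-br", bridge, "--",
           "--may-exist", "add-port", bridge, port, "--",
           "set", "Interface", port]
    cmd += ["external_ids:%s=%s" % kv for kv in external_ids.items()]
    subprocess.run(cmd, check=True)

It is the external_ids:iface-id value that lets ovn-controller recognize and claim the port a few lines below.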
Nov 26 23:42:23 compute-0 NetworkManager[56227]: <info>  [1764200543.0221] manager: (tapb298dc50-93): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Nov 26 23:42:23 compute-0 nova_compute[189387]: 2025-11-26 23:42:23.024 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:42:23 compute-0 nova_compute[189387]: 2025-11-26 23:42:23.032 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:23 compute-0 nova_compute[189387]: 2025-11-26 23:42:23.034 189391 INFO os_vif [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:71:58,bridge_name='br-int',has_traffic_filtering=True,id=b298dc50-93b6-439e-8c42-b9795220b150,network=Network(3f903c92-a599-4991-906d-3ed8e3e8eabd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb298dc50-93')#033[00m
Nov 26 23:42:23 compute-0 nova_compute[189387]: 2025-11-26 23:42:23.097 189391 DEBUG nova.virt.libvirt.driver [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:42:23 compute-0 nova_compute[189387]: 2025-11-26 23:42:23.098 189391 DEBUG nova.virt.libvirt.driver [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:42:23 compute-0 nova_compute[189387]: 2025-11-26 23:42:23.099 189391 DEBUG nova.virt.libvirt.driver [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] No VIF found with MAC fa:16:3e:77:71:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 26 23:42:23 compute-0 nova_compute[189387]: 2025-11-26 23:42:23.100 189391 INFO nova.virt.libvirt.driver [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Using config drive#033[00m
Nov 26 23:42:24 compute-0 nova_compute[189387]: 2025-11-26 23:42:24.078 189391 INFO nova.virt.libvirt.driver [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Creating config drive at /var/lib/nova/instances/8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk.config#033[00m
Nov 26 23:42:24 compute-0 nova_compute[189387]: 2025-11-26 23:42:24.086 189391 DEBUG oslo_concurrency.processutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi7ufdpoc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:42:24 compute-0 nova_compute[189387]: 2025-11-26 23:42:24.220 189391 DEBUG oslo_concurrency.processutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi7ufdpoc" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
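The config drive is a plain ISO9660 image (Joliet + Rock Ridge) whose volume label config-2 is what the guest's cloud-init scans for. Rebuilding one with the exact flags from the logged command (a sketch; needs mkisofs, and the directory would normally contain the openstack/ metadata tree):

    import pathlib, subprocess, tempfile

    def build_config_drive(output_iso, content_dir):
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", output_iso,
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
             "-quiet", "-J", "-r", "-V", "config-2", content_dir],
            check=True)

    with tempfile.TemporaryDirectory() as d:
        pathlib.Path(d, "openstack").mkdir()  # metadata tree would go here
        build_config_drive("/tmp/disk.config", d)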
Nov 26 23:42:24 compute-0 nova_compute[189387]: 2025-11-26 23:42:24.313 189391 DEBUG nova.network.neutron [None req-b2a5c585-d0e9-469c-92f6-f4494815bea2 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Updating instance_info_cache with network_info: [{"id": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "address": "fa:16:3e:94:50:8a", "network": {"id": "23864f37-12d9-4f3e-a0da-ef91c19406ac", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1986799011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cda1d63c3f9d4791a18030ebba1c1b11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2fce3d4-66", "ovs_interfaceid": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:42:24 compute-0 kernel: tapb298dc50-93: entered promiscuous mode
Nov 26 23:42:24 compute-0 NetworkManager[56227]: <info>  [1764200544.3236] manager: (tapb298dc50-93): new Tun device (/org/freedesktop/NetworkManager/Devices/52)
Nov 26 23:42:24 compute-0 nova_compute[189387]: 2025-11-26 23:42:24.331 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:24 compute-0 ovn_controller[97697]: 2025-11-26T23:42:24Z|00147|binding|INFO|Claiming lport b298dc50-93b6-439e-8c42-b9795220b150 for this chassis.
Nov 26 23:42:24 compute-0 ovn_controller[97697]: 2025-11-26T23:42:24Z|00148|binding|INFO|b298dc50-93b6-439e-8c42-b9795220b150: Claiming fa:16:3e:77:71:58 10.100.0.5
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:24.355 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:71:58 10.100.0.5'], port_security=['fa:16:3e:77:71:58 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '8c6c2d42-56ca-46f9-a12a-54c84adf5dbd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f903c92-a599-4991-906d-3ed8e3e8eabd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75af4c8383fc485a90ab9085bbabf0f8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '929860e4-b70e-4cb4-804a-81241a8ff3a6 e608f18f-4caf-4bf6-b81d-d3068f814eda', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c7386bcd-ad4e-45fd-95d3-be817d33b89f, chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=b298dc50-93b6-439e-8c42-b9795220b150) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:24.357 106595 INFO neutron.agent.ovn.metadata.agent [-] Port b298dc50-93b6-439e-8c42-b9795220b150 in datapath 3f903c92-a599-4991-906d-3ed8e3e8eabd bound to our chassis#033[00m
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:24.359 106595 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3f903c92-a599-4991-906d-3ed8e3e8eabd#033[00m
Nov 26 23:42:24 compute-0 nova_compute[189387]: 2025-11-26 23:42:24.364 189391 DEBUG oslo_concurrency.lockutils [None req-b2a5c585-d0e9-469c-92f6-f4494815bea2 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Releasing lock "refresh_cache-696e6032-d12c-4533-ae7c-c510dc917f0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:42:24 compute-0 nova_compute[189387]: 2025-11-26 23:42:24.365 189391 DEBUG nova.compute.manager [None req-b2a5c585-d0e9-469c-92f6-f4494815bea2 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Nov 26 23:42:24 compute-0 nova_compute[189387]: 2025-11-26 23:42:24.365 189391 DEBUG nova.compute.manager [None req-b2a5c585-d0e9-469c-92f6-f4494815bea2 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] network_info to inject: |[{"id": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "address": "fa:16:3e:94:50:8a", "network": {"id": "23864f37-12d9-4f3e-a0da-ef91c19406ac", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1986799011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cda1d63c3f9d4791a18030ebba1c1b11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2fce3d4-66", "ovs_interfaceid": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Nov 26 23:42:24 compute-0 nova_compute[189387]: 2025-11-26 23:42:24.367 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:24 compute-0 nova_compute[189387]: 2025-11-26 23:42:24.368 189391 DEBUG oslo_concurrency.lockutils [req-db83e635-f9fd-4dba-b7b9-3efcf77bca76 req-6e7d5956-cbc9-4b88-8c78-5f64ac16641b f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquired lock "refresh_cache-696e6032-d12c-4533-ae7c-c510dc917f0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:42:24 compute-0 nova_compute[189387]: 2025-11-26 23:42:24.368 189391 DEBUG nova.network.neutron [req-db83e635-f9fd-4dba-b7b9-3efcf77bca76 req-6e7d5956-cbc9-4b88-8c78-5f64ac16641b f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Refreshing network info cache for port b2fce3d4-667e-40f1-8fad-b23b6e4286db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 23:42:24 compute-0 nova_compute[189387]: 2025-11-26 23:42:24.370 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:24 compute-0 ovn_controller[97697]: 2025-11-26T23:42:24Z|00149|binding|INFO|Setting lport b298dc50-93b6-439e-8c42-b9795220b150 ovn-installed in OVS
Nov 26 23:42:24 compute-0 ovn_controller[97697]: 2025-11-26T23:42:24Z|00150|binding|INFO|Setting lport b298dc50-93b6-439e-8c42-b9795220b150 up in Southbound
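ovn-controller has now claimed the logical port for this chassis and marked it up in the Southbound database; that state change is what eventually comes back to Nova as the network-vif-plugged event. The same Port_Binding row can be inspected by hand, assuming ovn-sbctl is available on the host:

    import subprocess

    lport = "b298dc50-93b6-439e-8c42-b9795220b150"
    # "find" filters Southbound rows by column value; the output shows the
    # chassis, mac, up flag and neutron external_ids seen in the event above.
    out = subprocess.run(["ovn-sbctl", "find", "Port_Binding",
                          "logical_port=%s" % lport],
                         check=True, capture_output=True, text=True)
    print(out.stdout)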
Nov 26 23:42:24 compute-0 nova_compute[189387]: 2025-11-26 23:42:24.375 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:24 compute-0 systemd-udevd[251225]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:24.373 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[eb047727-2ec1-484d-b25e-565fc3a8d1e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:24.374 106595 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3f903c92-a1 in ovnmeta-3f903c92-a599-4991-906d-3ed8e3e8eabd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:24.376 239757 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3f903c92-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:24.377 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[ca77f13b-36a2-4737-a92d-77d3bfc32569]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:24.379 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[bf8c7d71-a1b3-4e68-b185-bafb03baaf6f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:24.393 106708 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff9749b-599d-40f6-8d71-4e8954e991ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
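To serve the 169.254.169.254 metadata service for the new network, the agent provisions an ovnmeta-<network-uuid> namespace with a veth pair: tap3f903c92-a1 sits inside the namespace while its peer tap3f903c92-a0 is attached to br-int (the DelPort/AddPort transactions further below). The equivalent iproute2 plumbing, sketched with the names from the log (needs root; the agent itself goes through privsep rather than shelling out):

    import subprocess

    ns = "ovnmeta-3f903c92-a599-4991-906d-3ed8e3e8eabd"
    outer, inner = "tap3f903c92-a0", "tap3f903c92-a1"

    def sh(*args):
        subprocess.run(args, check=True)

    sh("ip", "netns", "add", ns)
    sh("ip", "link", "add", outer, "type", "veth", "peer", "name", inner)
    sh("ip", "link", "set", inner, "netns", ns)  # move one end into the namespace
    sh("ip", "link", "set", outer, "up")
    sh("ip", "-n", ns, "link", "set", inner, "up")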
Nov 26 23:42:24 compute-0 NetworkManager[56227]: <info>  [1764200544.4035] device (tapb298dc50-93): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 23:42:24 compute-0 systemd-machined[155674]: New machine qemu-10-instance-0000000a.
Nov 26 23:42:24 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Nov 26 23:42:24 compute-0 NetworkManager[56227]: <info>  [1764200544.4126] device (tapb298dc50-93): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:24.425 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[433d87e4-b886-4e2d-83f5-c49527875886]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:24.453 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[fbeb133d-366e-41bb-bca5-23cb57c48f8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:24 compute-0 systemd-udevd[251231]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 23:42:24 compute-0 NetworkManager[56227]: <info>  [1764200544.4619] manager: (tap3f903c92-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/53)
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:24.462 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[e30cdbbd-52e0-44d3-96e2-65444ef7ef3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:24.493 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[d7ea76c4-e170-4584-b0f3-b34704e64b4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:24.496 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[6407009e-c07d-4a55-ae1f-dfa41ca313a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:24 compute-0 NetworkManager[56227]: <info>  [1764200544.5179] device (tap3f903c92-a0): carrier: link connected
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:24.523 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[a754812c-1089-4a96-a6dd-84547b079393]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:24.540 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[db797a9c-161a-4971-8f1e-6a9557f2fcad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3f903c92-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:5e:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522717, 'reachable_time': 21281, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251260, 'error': None, 'target': 'ovnmeta-3f903c92-a599-4991-906d-3ed8e3e8eabd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:24.553 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[e892d7b6-0f54-4965-ab55-e2ae5c5cc984]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe27:5e4c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522717, 'tstamp': 522717}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251261, 'error': None, 'target': 'ovnmeta-3f903c92-a599-4991-906d-3ed8e3e8eabd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:24.568 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[6d6c6ca6-0a19-41a4-91ec-95cfa320ba3b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3f903c92-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:5e:4c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522717, 'reachable_time': 21281, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251262, 'error': None, 'target': 'ovnmeta-3f903c92-a599-4991-906d-3ed8e3e8eabd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
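The two privsep replies above are netlink dumps taken inside the ovnmeta- namespace: an RTM_NEWADDR carrying the tap device's link-local address and an RTM_NEWLINK describing the veth half tap3f903c92-a1. A minimal sketch of producing the same dumps directly with pyroute2, the library the agent drives through oslo.privsep (an editor's illustration, not the agent's actual code path):

    from pyroute2 import NetNS

    # Namespace name taken from the 'target' field of the replies above.
    with NetNS('ovnmeta-3f903c92-a599-4991-906d-3ed8e3e8eabd') as ns:
        for link in ns.get_links():            # yields RTM_NEWLINK messages
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_OPERSTATE'))
        for addr in ns.get_addr():             # yields RTM_NEWADDR messages
            print(addr['family'], addr.get_attr('IFA_ADDRESS'))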
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:24.599 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[fb7dcc8e-9856-4dd7-8732-54ccec782d23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:24.658 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[ac13443f-3121-4141-baab-c814550084de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:24.660 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f903c92-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:24.660 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:24.660 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3f903c92-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:42:24 compute-0 nova_compute[189387]: 2025-11-26 23:42:24.662 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:24 compute-0 NetworkManager[56227]: <info>  [1764200544.6630] manager: (tap3f903c92-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Nov 26 23:42:24 compute-0 kernel: tap3f903c92-a0: entered promiscuous mode
Nov 26 23:42:24 compute-0 nova_compute[189387]: 2025-11-26 23:42:24.669 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:24.671 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3f903c92-a0, col_values=(('external_ids', {'iface-id': '5a5b3695-2a05-4fd3-bc2b-35e2893ba4c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
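The three single-command transactions above move the metadata tap off br-ex, plug it into br-int, and stamp the Interface row with the Neutron port UUID so ovn-controller can bind it. Roughly equivalent ovsdbapp API usage (a sketch; the OVSDB socket path and timeout are assumptions, the agent builds its connection from its own config):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    port, iface_id = 'tap3f903c92-a0', '5a5b3695-2a05-4fd3-bc2b-35e2893ba4c1'
    api.del_port(port, bridge='br-ex', if_exists=True).execute(check_error=True)
    api.add_port('br-int', port, may_exist=True).execute(check_error=True)
    api.db_set('Interface', port,
               ('external_ids', {'iface-id': iface_id})).execute(check_error=True)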
Nov 26 23:42:24 compute-0 nova_compute[189387]: 2025-11-26 23:42:24.672 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:24 compute-0 ovn_controller[97697]: 2025-11-26T23:42:24Z|00151|binding|INFO|Releasing lport 5a5b3695-2a05-4fd3-bc2b-35e2893ba4c1 from this chassis (sb_readonly=0)
Nov 26 23:42:24 compute-0 nova_compute[189387]: 2025-11-26 23:42:24.692 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:24 compute-0 nova_compute[189387]: 2025-11-26 23:42:24.693 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:24.696 106595 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3f903c92-a599-4991-906d-3ed8e3e8eabd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3f903c92-a599-4991-906d-3ed8e3e8eabd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
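The ENOENT above is expected on first provisioning: no haproxy has been spawned for this network yet, so the PID file cannot exist, and get_value_from_file treats a missing file as "no value" rather than an error. In outline (an editor's paraphrase of the helper, not the verbatim neutron code):

    def get_value_from_file(path, converter=str):
        # A missing or unparsable PID file just means no proxy is running yet.
        try:
            with open(path) as f:
                return converter(f.read().strip())
        except (OSError, ValueError):
            return None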
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:24.697 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[1b476705-9b21-42f5-b66f-032552d94455]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:24.698 106595 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: global
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]:    log         /dev/log local0 debug
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]:    log-tag     haproxy-metadata-proxy-3f903c92-a599-4991-906d-3ed8e3e8eabd
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]:    user        root
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]:    group       root
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]:    maxconn     1024
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]:    pidfile     /var/lib/neutron/external/pids/3f903c92-a599-4991-906d-3ed8e3e8eabd.pid.haproxy
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]:    daemon
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: defaults
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]:    log global
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]:    mode http
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]:    option httplog
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]:    option dontlognull
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]:    option http-server-close
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]:    option forwardfor
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]:    retries                 3
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]:    timeout http-request    30s
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]:    timeout connect         30s
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]:    timeout client          32s
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]:    timeout server          32s
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]:    timeout http-keep-alive 30s
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: listen listener
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]:    bind 169.254.169.254:80
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]:    server metadata /var/lib/neutron/metadata_proxy
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]:    http-request add-header X-OVN-Network-ID 3f903c92-a599-4991-906d-3ed8e3e8eabd
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
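Two details of the rendered config are worth noting: the bind on 169.254.169.254:80 only works because haproxy runs inside the ovnmeta- namespace where that address is plumbed, and the server address /var/lib/neutron/metadata_proxy begins with a slash, which haproxy interprets as a UNIX socket, i.e. the metadata agent's own listener; the injected X-OVN-Network-ID header is how the agent maps each request back to its network. A config like this can be syntax-checked without starting anything (sketch, using the path logged below):

    import subprocess

    # `haproxy -c` parses and validates the configuration, then exits.
    subprocess.run(
        ['haproxy', '-c', '-f',
         '/var/lib/neutron/ovn-metadata-proxy/'
         '3f903c92-a599-4991-906d-3ed8e3e8eabd.conf'],
        check=True)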
Nov 26 23:42:24 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:24.699 106595 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3f903c92-a599-4991-906d-3ed8e3e8eabd', 'env', 'PROCESS_TAG=haproxy-3f903c92-a599-4991-906d-3ed8e3e8eabd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3f903c92-a599-4991-906d-3ed8e3e8eabd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
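Stripped of rootwrap and the PROCESS_TAG marker, the command above amounts to launching haproxy inside the network namespace; on this node the invocation is additionally serviced by a podman container, as the neutron-haproxy-ovnmeta-* lines further down show. The bare-bones equivalent (illustrative only; requires root):

    import subprocess

    ns = 'ovnmeta-3f903c92-a599-4991-906d-3ed8e3e8eabd'
    cfg = ('/var/lib/neutron/ovn-metadata-proxy/'
           '3f903c92-a599-4991-906d-3ed8e3e8eabd.conf')
    # The 'daemon' directive in the config forks the master/worker pair.
    subprocess.run(['ip', 'netns', 'exec', ns, 'haproxy', '-f', cfg], check=True)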
Nov 26 23:42:24 compute-0 nova_compute[189387]: 2025-11-26 23:42:24.776 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200544.7763453, 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:42:24 compute-0 nova_compute[189387]: 2025-11-26 23:42:24.777 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] VM Started (Lifecycle Event)#033[00m
Nov 26 23:42:24 compute-0 nova_compute[189387]: 2025-11-26 23:42:24.829 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:42:24 compute-0 nova_compute[189387]: 2025-11-26 23:42:24.835 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200544.776485, 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:42:24 compute-0 nova_compute[189387]: 2025-11-26 23:42:24.836 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] VM Paused (Lifecycle Event)#033[00m
Nov 26 23:42:24 compute-0 nova_compute[189387]: 2025-11-26 23:42:24.883 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:42:24 compute-0 nova_compute[189387]: 2025-11-26 23:42:24.892 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 23:42:24 compute-0 nova_compute[189387]: 2025-11-26 23:42:24.997 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
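The numeric states in the sync message come from nova's power-state constants: the DB still holds 0 (NOSTATE) while libvirt reports 3 (PAUSED), consistent with the Paused lifecycle event; the guest is resumed a moment later. For reference, the standard nova.compute.power_state values:

    # nova.compute.power_state constants
    NOSTATE = 0     # DB value before the first successful sync
    RUNNING = 1     # reported once the guest is resumed below
    PAUSED = 3      # the VM power_state logged above
    SHUTDOWN = 4
    CRASHED = 6
    SUSPENDED = 7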
Nov 26 23:42:25 compute-0 nova_compute[189387]: 2025-11-26 23:42:25.053 189391 DEBUG nova.compute.manager [req-713473ce-7805-4146-bada-6872f1fae387 req-99afbcb5-5b39-431d-9e4c-1fa060f6fd47 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Received event network-vif-plugged-b298dc50-93b6-439e-8c42-b9795220b150 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:42:25 compute-0 nova_compute[189387]: 2025-11-26 23:42:25.054 189391 DEBUG oslo_concurrency.lockutils [req-713473ce-7805-4146-bada-6872f1fae387 req-99afbcb5-5b39-431d-9e4c-1fa060f6fd47 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "8c6c2d42-56ca-46f9-a12a-54c84adf5dbd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:25 compute-0 nova_compute[189387]: 2025-11-26 23:42:25.055 189391 DEBUG oslo_concurrency.lockutils [req-713473ce-7805-4146-bada-6872f1fae387 req-99afbcb5-5b39-431d-9e4c-1fa060f6fd47 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "8c6c2d42-56ca-46f9-a12a-54c84adf5dbd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:42:25 compute-0 nova_compute[189387]: 2025-11-26 23:42:25.056 189391 DEBUG oslo_concurrency.lockutils [req-713473ce-7805-4146-bada-6872f1fae387 req-99afbcb5-5b39-431d-9e4c-1fa060f6fd47 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "8c6c2d42-56ca-46f9-a12a-54c84adf5dbd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
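The Acquiring/acquired/released triplet around the event pop is oslo.concurrency's lock logging: nova serializes all event handling for one instance on a "<uuid>-events" lock. The pattern in miniature (a sketch reusing the lock name from the log):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('8c6c2d42-56ca-46f9-a12a-54c84adf5dbd-events')
    def pop_event():
        # Critical section: find and remove the waiter for this event.
        return 'network-vif-plugged'

    print(pop_event())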
Nov 26 23:42:25 compute-0 nova_compute[189387]: 2025-11-26 23:42:25.057 189391 DEBUG nova.compute.manager [req-713473ce-7805-4146-bada-6872f1fae387 req-99afbcb5-5b39-431d-9e4c-1fa060f6fd47 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Processing event network-vif-plugged-b298dc50-93b6-439e-8c42-b9795220b150 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 26 23:42:25 compute-0 nova_compute[189387]: 2025-11-26 23:42:25.058 189391 DEBUG nova.compute.manager [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 26 23:42:25 compute-0 nova_compute[189387]: 2025-11-26 23:42:25.064 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200545.0637686, 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:42:25 compute-0 nova_compute[189387]: 2025-11-26 23:42:25.064 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] VM Resumed (Lifecycle Event)#033[00m
Nov 26 23:42:25 compute-0 nova_compute[189387]: 2025-11-26 23:42:25.067 189391 DEBUG nova.virt.libvirt.driver [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 26 23:42:25 compute-0 nova_compute[189387]: 2025-11-26 23:42:25.077 189391 INFO nova.virt.libvirt.driver [-] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Instance spawned successfully.#033[00m
Nov 26 23:42:25 compute-0 nova_compute[189387]: 2025-11-26 23:42:25.077 189391 DEBUG nova.virt.libvirt.driver [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 26 23:42:25 compute-0 nova_compute[189387]: 2025-11-26 23:42:25.143 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:42:25 compute-0 nova_compute[189387]: 2025-11-26 23:42:25.156 189391 DEBUG nova.virt.libvirt.driver [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:42:25 compute-0 nova_compute[189387]: 2025-11-26 23:42:25.157 189391 DEBUG nova.virt.libvirt.driver [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:42:25 compute-0 nova_compute[189387]: 2025-11-26 23:42:25.158 189391 DEBUG nova.virt.libvirt.driver [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:42:25 compute-0 nova_compute[189387]: 2025-11-26 23:42:25.159 189391 DEBUG nova.virt.libvirt.driver [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:42:25 compute-0 nova_compute[189387]: 2025-11-26 23:42:25.160 189391 DEBUG nova.virt.libvirt.driver [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:42:25 compute-0 nova_compute[189387]: 2025-11-26 23:42:25.161 189391 DEBUG nova.virt.libvirt.driver [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:42:25 compute-0 nova_compute[189387]: 2025-11-26 23:42:25.167 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 23:42:25 compute-0 nova_compute[189387]: 2025-11-26 23:42:25.277 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 23:42:25 compute-0 podman[251298]: 2025-11-26 23:42:25.210390005 +0000 UTC m=+0.061514869 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 26 23:42:25 compute-0 nova_compute[189387]: 2025-11-26 23:42:25.315 189391 INFO nova.compute.manager [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Took 6.47 seconds to spawn the instance on the hypervisor.#033[00m
Nov 26 23:42:25 compute-0 nova_compute[189387]: 2025-11-26 23:42:25.316 189391 DEBUG nova.compute.manager [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:42:25 compute-0 podman[251298]: 2025-11-26 23:42:25.345846761 +0000 UTC m=+0.196971575 container create 4e9b708c5ab6d70f2f44548185c284e68326eb7a55e150f6376d2732ad68b359 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f903c92-a599-4991-906d-3ed8e3e8eabd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 23:42:25 compute-0 systemd[1]: Started libpod-conmon-4e9b708c5ab6d70f2f44548185c284e68326eb7a55e150f6376d2732ad68b359.scope.
Nov 26 23:42:25 compute-0 nova_compute[189387]: 2025-11-26 23:42:25.403 189391 DEBUG nova.network.neutron [req-52dc5319-2bed-42a9-a1d7-37985af0f139 req-356b3dbe-54f1-4e61-a587-203afb421f03 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Updated VIF entry in instance network info cache for port b298dc50-93b6-439e-8c42-b9795220b150. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 23:42:25 compute-0 nova_compute[189387]: 2025-11-26 23:42:25.405 189391 DEBUG nova.network.neutron [req-52dc5319-2bed-42a9-a1d7-37985af0f139 req-356b3dbe-54f1-4e61-a587-203afb421f03 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Updating instance_info_cache with network_info: [{"id": "b298dc50-93b6-439e-8c42-b9795220b150", "address": "fa:16:3e:77:71:58", "network": {"id": "3f903c92-a599-4991-906d-3ed8e3e8eabd", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2000708722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75af4c8383fc485a90ab9085bbabf0f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb298dc50-93", "ovs_interfaceid": "b298dc50-93b6-439e-8c42-b9795220b150", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:42:25 compute-0 nova_compute[189387]: 2025-11-26 23:42:25.407 189391 INFO nova.compute.manager [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Took 7.12 seconds to build instance.#033[00m
Nov 26 23:42:25 compute-0 nova_compute[189387]: 2025-11-26 23:42:25.428 189391 DEBUG oslo_concurrency.lockutils [req-52dc5319-2bed-42a9-a1d7-37985af0f139 req-356b3dbe-54f1-4e61-a587-203afb421f03 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Releasing lock "refresh_cache-8c6c2d42-56ca-46f9-a12a-54c84adf5dbd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:42:25 compute-0 nova_compute[189387]: 2025-11-26 23:42:25.437 189391 DEBUG oslo_concurrency.lockutils [None req-ed0aabbb-8102-4b9d-b511-e5908bfa8157 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Lock "8c6c2d42-56ca-46f9-a12a-54c84adf5dbd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.224s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:42:25 compute-0 systemd[1]: Started libcrun container.
Nov 26 23:42:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79591e0d8d81ca6dbf911e2c575d313c692a44d19d86d5bcb63dbf444961091a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 26 23:42:25 compute-0 podman[251298]: 2025-11-26 23:42:25.526187512 +0000 UTC m=+0.377312346 container init 4e9b708c5ab6d70f2f44548185c284e68326eb7a55e150f6376d2732ad68b359 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f903c92-a599-4991-906d-3ed8e3e8eabd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:42:25 compute-0 nova_compute[189387]: 2025-11-26 23:42:25.530 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:25 compute-0 podman[251298]: 2025-11-26 23:42:25.54112289 +0000 UTC m=+0.392247694 container start 4e9b708c5ab6d70f2f44548185c284e68326eb7a55e150f6376d2732ad68b359 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f903c92-a599-4991-906d-3ed8e3e8eabd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 26 23:42:25 compute-0 neutron-haproxy-ovnmeta-3f903c92-a599-4991-906d-3ed8e3e8eabd[251312]: [NOTICE]   (251317) : New worker (251319) forked
Nov 26 23:42:25 compute-0 neutron-haproxy-ovnmeta-3f903c92-a599-4991-906d-3ed8e3e8eabd[251312]: [NOTICE]   (251317) : Loading success.
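At this point the proxy for network 3f903c92 is up as the podman container created above rather than as a bare host process. Its state can be confirmed out-of-band (illustrative):

    import subprocess

    name = 'neutron-haproxy-ovnmeta-3f903c92-a599-4991-906d-3ed8e3e8eabd'
    status = subprocess.run(
        ['podman', 'inspect', '--format', '{{.State.Status}}', name],
        capture_output=True, text=True, check=True)
    print(status.stdout.strip())   # expected: running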
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.323 189391 DEBUG oslo_concurrency.lockutils [None req-23aad8e3-cce0-47e5-bcc2-8d7115271f21 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Acquiring lock "696e6032-d12c-4533-ae7c-c510dc917f0a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.326 189391 DEBUG oslo_concurrency.lockutils [None req-23aad8e3-cce0-47e5-bcc2-8d7115271f21 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Lock "696e6032-d12c-4533-ae7c-c510dc917f0a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.327 189391 DEBUG oslo_concurrency.lockutils [None req-23aad8e3-cce0-47e5-bcc2-8d7115271f21 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Acquiring lock "696e6032-d12c-4533-ae7c-c510dc917f0a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.328 189391 DEBUG oslo_concurrency.lockutils [None req-23aad8e3-cce0-47e5-bcc2-8d7115271f21 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Lock "696e6032-d12c-4533-ae7c-c510dc917f0a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.330 189391 DEBUG oslo_concurrency.lockutils [None req-23aad8e3-cce0-47e5-bcc2-8d7115271f21 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Lock "696e6032-d12c-4533-ae7c-c510dc917f0a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.333 189391 INFO nova.compute.manager [None req-23aad8e3-cce0-47e5-bcc2-8d7115271f21 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Terminating instance#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.335 189391 DEBUG nova.compute.manager [None req-23aad8e3-cce0-47e5-bcc2-8d7115271f21 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.340 189391 DEBUG nova.network.neutron [req-db83e635-f9fd-4dba-b7b9-3efcf77bca76 req-6e7d5956-cbc9-4b88-8c78-5f64ac16641b f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Updated VIF entry in instance network info cache for port b2fce3d4-667e-40f1-8fad-b23b6e4286db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.343 189391 DEBUG nova.network.neutron [req-db83e635-f9fd-4dba-b7b9-3efcf77bca76 req-6e7d5956-cbc9-4b88-8c78-5f64ac16641b f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Updating instance_info_cache with network_info: [{"id": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "address": "fa:16:3e:94:50:8a", "network": {"id": "23864f37-12d9-4f3e-a0da-ef91c19406ac", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1986799011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cda1d63c3f9d4791a18030ebba1c1b11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2fce3d4-66", "ovs_interfaceid": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.366 189391 DEBUG oslo_concurrency.lockutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Acquiring lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.367 189391 DEBUG oslo_concurrency.lockutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.369 189391 DEBUG oslo_concurrency.lockutils [req-db83e635-f9fd-4dba-b7b9-3efcf77bca76 req-6e7d5956-cbc9-4b88-8c78-5f64ac16641b f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Releasing lock "refresh_cache-696e6032-d12c-4533-ae7c-c510dc917f0a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:42:26 compute-0 kernel: tapb2fce3d4-66 (unregistering): left promiscuous mode
Nov 26 23:42:26 compute-0 NetworkManager[56227]: <info>  [1764200546.3847] device (tapb2fce3d4-66): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.399 189391 DEBUG nova.compute.manager [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.411 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:26 compute-0 ovn_controller[97697]: 2025-11-26T23:42:26Z|00152|binding|INFO|Releasing lport b2fce3d4-667e-40f1-8fad-b23b6e4286db from this chassis (sb_readonly=0)
Nov 26 23:42:26 compute-0 ovn_controller[97697]: 2025-11-26T23:42:26Z|00153|binding|INFO|Setting lport b2fce3d4-667e-40f1-8fad-b23b6e4286db down in Southbound
Nov 26 23:42:26 compute-0 ovn_controller[97697]: 2025-11-26T23:42:26Z|00154|binding|INFO|Removing iface tapb2fce3d4-66 ovn-installed in OVS
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.423 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:26 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:26.437 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:94:50:8a 10.100.0.10'], port_security=['fa:16:3e:94:50:8a 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '696e6032-d12c-4533-ae7c-c510dc917f0a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-23864f37-12d9-4f3e-a0da-ef91c19406ac', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cda1d63c3f9d4791a18030ebba1c1b11', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2674a8ce-e68b-41b7-9c29-4c54411c5b16', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.209'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7e8838df-2918-44ef-8ded-da51293ac711, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=b2fce3d4-667e-40f1-8fad-b23b6e4286db) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:42:26 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:26.439 106595 INFO neutron.agent.ovn.metadata.agent [-] Port b2fce3d4-667e-40f1-8fad-b23b6e4286db in datapath 23864f37-12d9-4f3e-a0da-ef91c19406ac unbound from our chassis#033[00m
Nov 26 23:42:26 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:26.441 106595 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 23864f37-12d9-4f3e-a0da-ef91c19406ac, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.442 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:26 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:26.444 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[2674b5b3-4672-4ab6-9852-63d2995dd896]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:26 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:26.446 106595 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-23864f37-12d9-4f3e-a0da-ef91c19406ac namespace which is not needed anymore#033[00m
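The teardown above is driven by ovsdbapp's row-event machinery: the agent registers handlers against the Southbound Port_Binding table and reacts when a row's chassis assignment changes, as the Matched UPDATE line shows. A skeletal version of such a handler (simplified; neutron's real event class carries more filtering, and update_datapath here is a hypothetical hook):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, agent):
            self.agent = agent
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Only fire when the chassis column actually changed.
            return hasattr(old, 'chassis')

        def run(self, event, row, old):
            # Provision or tear down the ovnmeta- namespace as needed.
            self.agent.update_datapath(row)   # hypothetical agent hook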
Nov 26 23:42:26 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000006.scope: Deactivated successfully.
Nov 26 23:42:26 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000006.scope: Consumed 41.373s CPU time.
Nov 26 23:42:26 compute-0 systemd-machined[155674]: Machine qemu-7-instance-00000006 terminated.
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.493 189391 DEBUG oslo_concurrency.lockutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.493 189391 DEBUG oslo_concurrency.lockutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.502 189391 DEBUG nova.virt.hardware [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.502 189391 INFO nova.compute.claims [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 26 23:42:26 compute-0 podman[251328]: 2025-11-26 23:42:26.54852459 +0000 UTC m=+0.134554233 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.640 189391 INFO nova.virt.libvirt.driver [-] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Instance destroyed successfully.#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.641 189391 DEBUG nova.objects.instance [None req-23aad8e3-cce0-47e5-bcc2-8d7115271f21 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Lazy-loading 'resources' on Instance uuid 696e6032-d12c-4533-ae7c-c510dc917f0a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.688 189391 DEBUG nova.virt.libvirt.vif [None req-23aad8e3-cce0-47e5-bcc2-8d7115271f21 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T23:40:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-518237589',display_name='tempest-AttachInterfacesUnderV243Test-server-518237589',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-518237589',id=6,image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAeCacre+DpKbR9zR5rGfgdgg0OxLzmuU8CTtn4qnPlPeLMLpl9jSBZzyDL9JbVAxWJZsWYdBzTeeojuXVvs32m0Ze42+0Cdj57DGNt5DQ+xHdJMtxDqfVliNQonyhT4jw==',key_name='tempest-keypair-1706157709',keypairs=<?>,launch_index=0,launched_at=2025-11-26T23:41:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cda1d63c3f9d4791a18030ebba1c1b11',ramdisk_id='',reservation_id='r-6l92ar4i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-1379565429',owner_user_name='tempest-AttachInterfacesUnderV243Test-1379565429-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T23:42:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='357477a3688848b099ed3f5f61c71771',uuid=696e6032-d12c-4533-ae7c-c510dc917f0a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "address": "fa:16:3e:94:50:8a", "network": {"id": "23864f37-12d9-4f3e-a0da-ef91c19406ac", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1986799011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cda1d63c3f9d4791a18030ebba1c1b11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2fce3d4-66", "ovs_interfaceid": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.689 189391 DEBUG nova.network.os_vif_util [None req-23aad8e3-cce0-47e5-bcc2-8d7115271f21 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Converting VIF {"id": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "address": "fa:16:3e:94:50:8a", "network": {"id": "23864f37-12d9-4f3e-a0da-ef91c19406ac", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1986799011-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cda1d63c3f9d4791a18030ebba1c1b11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb2fce3d4-66", "ovs_interfaceid": "b2fce3d4-667e-40f1-8fad-b23b6e4286db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.690 189391 DEBUG nova.network.os_vif_util [None req-23aad8e3-cce0-47e5-bcc2-8d7115271f21 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:94:50:8a,bridge_name='br-int',has_traffic_filtering=True,id=b2fce3d4-667e-40f1-8fad-b23b6e4286db,network=Network(23864f37-12d9-4f3e-a0da-ef91c19406ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2fce3d4-66') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.691 189391 DEBUG os_vif [None req-23aad8e3-cce0-47e5-bcc2-8d7115271f21 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:94:50:8a,bridge_name='br-int',has_traffic_filtering=True,id=b2fce3d4-667e-40f1-8fad-b23b6e4286db,network=Network(23864f37-12d9-4f3e-a0da-ef91c19406ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2fce3d4-66') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.693 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.693 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb2fce3d4-66, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.705 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.709 189391 INFO os_vif [None req-23aad8e3-cce0-47e5-bcc2-8d7115271f21 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:94:50:8a,bridge_name='br-int',has_traffic_filtering=True,id=b2fce3d4-667e-40f1-8fad-b23b6e4286db,network=Network(23864f37-12d9-4f3e-a0da-ef91c19406ac),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb2fce3d4-66')#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.710 189391 INFO nova.virt.libvirt.driver [None req-23aad8e3-cce0-47e5-bcc2-8d7115271f21 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Deleting instance files /var/lib/nova/instances/696e6032-d12c-4533-ae7c-c510dc917f0a_del#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.711 189391 INFO nova.virt.libvirt.driver [None req-23aad8e3-cce0-47e5-bcc2-8d7115271f21 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Deletion of /var/lib/nova/instances/696e6032-d12c-4533-ae7c-c510dc917f0a_del complete#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.727 189391 DEBUG nova.compute.provider_tree [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:42:26 compute-0 neutron-haproxy-ovnmeta-23864f37-12d9-4f3e-a0da-ef91c19406ac[250214]: [NOTICE]   (250218) : haproxy version is 2.8.14-c23fe91
Nov 26 23:42:26 compute-0 neutron-haproxy-ovnmeta-23864f37-12d9-4f3e-a0da-ef91c19406ac[250214]: [NOTICE]   (250218) : path to executable is /usr/sbin/haproxy
Nov 26 23:42:26 compute-0 neutron-haproxy-ovnmeta-23864f37-12d9-4f3e-a0da-ef91c19406ac[250214]: [WARNING]  (250218) : Exiting Master process...
Nov 26 23:42:26 compute-0 neutron-haproxy-ovnmeta-23864f37-12d9-4f3e-a0da-ef91c19406ac[250214]: [WARNING]  (250218) : Exiting Master process...
Nov 26 23:42:26 compute-0 neutron-haproxy-ovnmeta-23864f37-12d9-4f3e-a0da-ef91c19406ac[250214]: [ALERT]    (250218) : Current worker (250220) exited with code 143 (Terminated)
Nov 26 23:42:26 compute-0 neutron-haproxy-ovnmeta-23864f37-12d9-4f3e-a0da-ef91c19406ac[250214]: [WARNING]  (250218) : All workers exited. Exiting... (0)
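The ALERT about exit code 143 is benign during this teardown: 143 is 128 + SIGTERM (15), meaning the worker was killed by the master's shutdown signal, after which the master itself exits cleanly with status 0. Quickly verified:

    import signal
    print(128 + int(signal.SIGTERM))   # 143, the worker exit code above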
Nov 26 23:42:26 compute-0 systemd[1]: libpod-2646903a21f877f4e80958734c700d43a4e5eb56fe3fbcc2d2ff1b81fffabed2.scope: Deactivated successfully.
Nov 26 23:42:26 compute-0 podman[251374]: 2025-11-26 23:42:26.744584889 +0000 UTC m=+0.134534332 container died 2646903a21f877f4e80958734c700d43a4e5eb56fe3fbcc2d2ff1b81fffabed2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23864f37-12d9-4f3e-a0da-ef91c19406ac, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.941 189391 DEBUG nova.scheduler.client.report [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
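Placement turns this inventory into schedulable capacity as (total - reserved) * allocation_ratio, so the unchanged report above corresponds to 32 VCPUs, 7168 MB of RAM and 70.2 GB of disk:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2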
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.953 189391 INFO nova.compute.manager [None req-23aad8e3-cce0-47e5-bcc2-8d7115271f21 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Took 0.62 seconds to destroy the instance on the hypervisor.#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.953 189391 DEBUG oslo.service.loopingcall [None req-23aad8e3-cce0-47e5-bcc2-8d7115271f21 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.954 189391 DEBUG nova.compute.manager [-] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 26 23:42:26 compute-0 nova_compute[189387]: 2025-11-26 23:42:26.955 189391 DEBUG nova.network.neutron [-] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.058 189391 DEBUG oslo_concurrency.lockutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.059 189391 DEBUG nova.compute.manager [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 26 23:42:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2646903a21f877f4e80958734c700d43a4e5eb56fe3fbcc2d2ff1b81fffabed2-userdata-shm.mount: Deactivated successfully.
Nov 26 23:42:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-974ab5427ddd94e7ee1765db7fec224f9831ead7105318c16f469b33895d6b48-merged.mount: Deactivated successfully.
Nov 26 23:42:27 compute-0 podman[251374]: 2025-11-26 23:42:27.145726288 +0000 UTC m=+0.535675731 container cleanup 2646903a21f877f4e80958734c700d43a4e5eb56fe3fbcc2d2ff1b81fffabed2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23864f37-12d9-4f3e-a0da-ef91c19406ac, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.169 189391 DEBUG nova.compute.manager [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.169 189391 DEBUG nova.network.neutron [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 26 23:42:27 compute-0 systemd[1]: libpod-conmon-2646903a21f877f4e80958734c700d43a4e5eb56fe3fbcc2d2ff1b81fffabed2.scope: Deactivated successfully.
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.198 189391 DEBUG nova.compute.manager [req-f5069ffe-d49e-4ee3-808c-e46f2ab5524a req-9b47cfcf-6634-4d9e-b141-062fc5bcaf4a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Received event network-vif-plugged-b298dc50-93b6-439e-8c42-b9795220b150 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.199 189391 DEBUG oslo_concurrency.lockutils [req-f5069ffe-d49e-4ee3-808c-e46f2ab5524a req-9b47cfcf-6634-4d9e-b141-062fc5bcaf4a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "8c6c2d42-56ca-46f9-a12a-54c84adf5dbd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.200 189391 DEBUG oslo_concurrency.lockutils [req-f5069ffe-d49e-4ee3-808c-e46f2ab5524a req-9b47cfcf-6634-4d9e-b141-062fc5bcaf4a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "8c6c2d42-56ca-46f9-a12a-54c84adf5dbd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.200 189391 DEBUG oslo_concurrency.lockutils [req-f5069ffe-d49e-4ee3-808c-e46f2ab5524a req-9b47cfcf-6634-4d9e-b141-062fc5bcaf4a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "8c6c2d42-56ca-46f9-a12a-54c84adf5dbd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.200 189391 DEBUG nova.compute.manager [req-f5069ffe-d49e-4ee3-808c-e46f2ab5524a req-9b47cfcf-6634-4d9e-b141-062fc5bcaf4a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] No waiting events found dispatching network-vif-plugged-b298dc50-93b6-439e-8c42-b9795220b150 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.200 189391 WARNING nova.compute.manager [req-f5069ffe-d49e-4ee3-808c-e46f2ab5524a req-9b47cfcf-6634-4d9e-b141-062fc5bcaf4a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Received unexpected event network-vif-plugged-b298dc50-93b6-439e-8c42-b9795220b150 for instance with vm_state active and task_state None.#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.200 189391 DEBUG nova.compute.manager [req-f5069ffe-d49e-4ee3-808c-e46f2ab5524a req-9b47cfcf-6634-4d9e-b141-062fc5bcaf4a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Received event network-vif-unplugged-b2fce3d4-667e-40f1-8fad-b23b6e4286db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.201 189391 DEBUG oslo_concurrency.lockutils [req-f5069ffe-d49e-4ee3-808c-e46f2ab5524a req-9b47cfcf-6634-4d9e-b141-062fc5bcaf4a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "696e6032-d12c-4533-ae7c-c510dc917f0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.201 189391 DEBUG oslo_concurrency.lockutils [req-f5069ffe-d49e-4ee3-808c-e46f2ab5524a req-9b47cfcf-6634-4d9e-b141-062fc5bcaf4a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "696e6032-d12c-4533-ae7c-c510dc917f0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.201 189391 DEBUG oslo_concurrency.lockutils [req-f5069ffe-d49e-4ee3-808c-e46f2ab5524a req-9b47cfcf-6634-4d9e-b141-062fc5bcaf4a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "696e6032-d12c-4533-ae7c-c510dc917f0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.201 189391 DEBUG nova.compute.manager [req-f5069ffe-d49e-4ee3-808c-e46f2ab5524a req-9b47cfcf-6634-4d9e-b141-062fc5bcaf4a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] No waiting events found dispatching network-vif-unplugged-b2fce3d4-667e-40f1-8fad-b23b6e4286db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.202 189391 DEBUG nova.compute.manager [req-f5069ffe-d49e-4ee3-808c-e46f2ab5524a req-9b47cfcf-6634-4d9e-b141-062fc5bcaf4a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Received event network-vif-unplugged-b2fce3d4-667e-40f1-8fad-b23b6e4286db for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.339 189391 INFO nova.virt.libvirt.driver [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 26 23:42:27 compute-0 podman[251416]: 2025-11-26 23:42:27.403803439 +0000 UTC m=+0.213179037 container remove 2646903a21f877f4e80958734c700d43a4e5eb56fe3fbcc2d2ff1b81fffabed2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-23864f37-12d9-4f3e-a0da-ef91c19406ac, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:42:27 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:27.415 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[2fde2924-f2d2-4b4a-9d18-46b0c1b067ac]: (4, ('Wed Nov 26 11:42:26 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-23864f37-12d9-4f3e-a0da-ef91c19406ac (2646903a21f877f4e80958734c700d43a4e5eb56fe3fbcc2d2ff1b81fffabed2)\n2646903a21f877f4e80958734c700d43a4e5eb56fe3fbcc2d2ff1b81fffabed2\nWed Nov 26 11:42:27 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-23864f37-12d9-4f3e-a0da-ef91c19406ac (2646903a21f877f4e80958734c700d43a4e5eb56fe3fbcc2d2ff1b81fffabed2)\n2646903a21f877f4e80958734c700d43a4e5eb56fe3fbcc2d2ff1b81fffabed2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:27 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:27.417 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[106c619d-e677-4049-9480-daf06a1629d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:27 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:27.419 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap23864f37-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.422 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:27 compute-0 kernel: tap23864f37-10: left promiscuous mode
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.442 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.445 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:27 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:27.456 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[18876136-2e35-4dda-a921-9d125bc6dc1a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:27 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:27.471 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[bb5e6857-a77f-428c-b776-d780f085b7a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:27 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:27.475 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[718eac35-dbec-4db1-9ba1-79a0460bd543]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:27 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:27.490 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[f14f8ad2-59a6-489a-b948-88b870b72c2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 514650, 'reachable_time': 19012, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251428, 'error': None, 'target': 'ovnmeta-23864f37-12d9-4f3e-a0da-ef91c19406ac', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:27 compute-0 systemd[1]: run-netns-ovnmeta\x2d23864f37\x2d12d9\x2d4f3e\x2da0da\x2def91c19406ac.mount: Deactivated successfully.
Nov 26 23:42:27 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:27.493 106708 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-23864f37-12d9-4f3e-a0da-ef91c19406ac deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 26 23:42:27 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:27.494 106708 DEBUG oslo.privsep.daemon [-] privsep: reply[a365cc1e-726a-4b15-9bde-d9b3c13b7485]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.551 189391 DEBUG nova.compute.manager [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.571 189391 DEBUG nova.policy [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3753fb1a520b4e088ce6979db5ae3773', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b5cd62a5ad724aed83d939e3ba6d7fd7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.670 189391 DEBUG nova.compute.manager [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.671 189391 DEBUG nova.virt.libvirt.driver [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.671 189391 INFO nova.virt.libvirt.driver [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Creating image(s)#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.672 189391 DEBUG oslo_concurrency.lockutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Acquiring lock "/var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.672 189391 DEBUG oslo_concurrency.lockutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lock "/var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.672 189391 DEBUG oslo_concurrency.lockutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lock "/var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.693 189391 DEBUG oslo_concurrency.processutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.778 189391 DEBUG oslo_concurrency.processutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.779 189391 DEBUG oslo_concurrency.lockutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Acquiring lock "4bfc824fda96e5558a690ed70963ecd686d78685" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.780 189391 DEBUG oslo_concurrency.lockutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lock "4bfc824fda96e5558a690ed70963ecd686d78685" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.795 189391 DEBUG oslo_concurrency.processutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.864 189391 DEBUG oslo_concurrency.processutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:42:27 compute-0 nova_compute[189387]: 2025-11-26 23:42:27.865 189391 DEBUG oslo_concurrency.processutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685,backing_fmt=raw /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:42:28 compute-0 nova_compute[189387]: 2025-11-26 23:42:28.028 189391 DEBUG oslo_concurrency.processutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685,backing_fmt=raw /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk 1073741824" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:42:28 compute-0 nova_compute[189387]: 2025-11-26 23:42:28.029 189391 DEBUG oslo_concurrency.lockutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lock "4bfc824fda96e5558a690ed70963ecd686d78685" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.250s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:42:28 compute-0 nova_compute[189387]: 2025-11-26 23:42:28.030 189391 DEBUG oslo_concurrency.processutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:42:28 compute-0 nova_compute[189387]: 2025-11-26 23:42:28.120 189391 DEBUG oslo_concurrency.processutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:42:28 compute-0 nova_compute[189387]: 2025-11-26 23:42:28.121 189391 DEBUG nova.virt.disk.api [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Checking if we can resize image /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 26 23:42:28 compute-0 nova_compute[189387]: 2025-11-26 23:42:28.121 189391 DEBUG oslo_concurrency.processutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:42:28 compute-0 nova_compute[189387]: 2025-11-26 23:42:28.205 189391 DEBUG oslo_concurrency.processutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:42:28 compute-0 nova_compute[189387]: 2025-11-26 23:42:28.206 189391 DEBUG nova.virt.disk.api [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Cannot resize image /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Nov 26 23:42:28 compute-0 nova_compute[189387]: 2025-11-26 23:42:28.207 189391 DEBUG nova.objects.instance [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lazy-loading 'migration_context' on Instance uuid 2b8e8c61-3efb-436e-87b5-35ac9fe60d69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:42:28 compute-0 nova_compute[189387]: 2025-11-26 23:42:28.319 189391 DEBUG nova.virt.libvirt.driver [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 26 23:42:28 compute-0 nova_compute[189387]: 2025-11-26 23:42:28.319 189391 DEBUG nova.virt.libvirt.driver [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Ensure instance console log exists: /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 26 23:42:28 compute-0 nova_compute[189387]: 2025-11-26 23:42:28.320 189391 DEBUG oslo_concurrency.lockutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:28 compute-0 nova_compute[189387]: 2025-11-26 23:42:28.320 189391 DEBUG oslo_concurrency.lockutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:42:28 compute-0 nova_compute[189387]: 2025-11-26 23:42:28.321 189391 DEBUG oslo_concurrency.lockutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:42:28 compute-0 nova_compute[189387]: 2025-11-26 23:42:28.491 189391 DEBUG nova.network.neutron [-] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:42:28 compute-0 nova_compute[189387]: 2025-11-26 23:42:28.518 189391 INFO nova.compute.manager [-] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Took 1.56 seconds to deallocate network for instance.#033[00m
Nov 26 23:42:28 compute-0 nova_compute[189387]: 2025-11-26 23:42:28.584 189391 DEBUG oslo_concurrency.lockutils [None req-23aad8e3-cce0-47e5-bcc2-8d7115271f21 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:28 compute-0 nova_compute[189387]: 2025-11-26 23:42:28.585 189391 DEBUG oslo_concurrency.lockutils [None req-23aad8e3-cce0-47e5-bcc2-8d7115271f21 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:42:29 compute-0 nova_compute[189387]: 2025-11-26 23:42:29.040 189391 DEBUG nova.compute.provider_tree [None req-23aad8e3-cce0-47e5-bcc2-8d7115271f21 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:42:29 compute-0 nova_compute[189387]: 2025-11-26 23:42:29.087 189391 DEBUG nova.scheduler.client.report [None req-23aad8e3-cce0-47e5-bcc2-8d7115271f21 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 23:42:29 compute-0 nova_compute[189387]: 2025-11-26 23:42:29.140 189391 DEBUG oslo_concurrency.lockutils [None req-23aad8e3-cce0-47e5-bcc2-8d7115271f21 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:42:29 compute-0 nova_compute[189387]: 2025-11-26 23:42:29.256 189391 INFO nova.scheduler.client.report [None req-23aad8e3-cce0-47e5-bcc2-8d7115271f21 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Deleted allocations for instance 696e6032-d12c-4533-ae7c-c510dc917f0a#033[00m
Nov 26 23:42:29 compute-0 nova_compute[189387]: 2025-11-26 23:42:29.425 189391 DEBUG nova.compute.manager [req-51ba0cfe-2c6c-43e2-b349-d7b31c61f514 req-28c1eb66-3853-4765-b0b4-7a76cc1a61c9 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Received event network-vif-deleted-b2fce3d4-667e-40f1-8fad-b23b6e4286db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:42:29 compute-0 nova_compute[189387]: 2025-11-26 23:42:29.444 189391 DEBUG oslo_concurrency.lockutils [None req-23aad8e3-cce0-47e5-bcc2-8d7115271f21 357477a3688848b099ed3f5f61c71771 cda1d63c3f9d4791a18030ebba1c1b11 - - default default] Lock "696e6032-d12c-4533-ae7c-c510dc917f0a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:42:29 compute-0 nova_compute[189387]: 2025-11-26 23:42:29.553 189391 DEBUG nova.compute.manager [req-68c7b869-97ec-4e37-a0a9-458aa8f574c0 req-c98641bb-8d06-47f1-aa90-c364e4602c2e f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Received event network-vif-plugged-b2fce3d4-667e-40f1-8fad-b23b6e4286db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:42:29 compute-0 nova_compute[189387]: 2025-11-26 23:42:29.553 189391 DEBUG oslo_concurrency.lockutils [req-68c7b869-97ec-4e37-a0a9-458aa8f574c0 req-c98641bb-8d06-47f1-aa90-c364e4602c2e f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "696e6032-d12c-4533-ae7c-c510dc917f0a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:29 compute-0 nova_compute[189387]: 2025-11-26 23:42:29.553 189391 DEBUG oslo_concurrency.lockutils [req-68c7b869-97ec-4e37-a0a9-458aa8f574c0 req-c98641bb-8d06-47f1-aa90-c364e4602c2e f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "696e6032-d12c-4533-ae7c-c510dc917f0a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:42:29 compute-0 nova_compute[189387]: 2025-11-26 23:42:29.554 189391 DEBUG oslo_concurrency.lockutils [req-68c7b869-97ec-4e37-a0a9-458aa8f574c0 req-c98641bb-8d06-47f1-aa90-c364e4602c2e f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "696e6032-d12c-4533-ae7c-c510dc917f0a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:42:29 compute-0 nova_compute[189387]: 2025-11-26 23:42:29.554 189391 DEBUG nova.compute.manager [req-68c7b869-97ec-4e37-a0a9-458aa8f574c0 req-c98641bb-8d06-47f1-aa90-c364e4602c2e f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] No waiting events found dispatching network-vif-plugged-b2fce3d4-667e-40f1-8fad-b23b6e4286db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:42:29 compute-0 nova_compute[189387]: 2025-11-26 23:42:29.554 189391 WARNING nova.compute.manager [req-68c7b869-97ec-4e37-a0a9-458aa8f574c0 req-c98641bb-8d06-47f1-aa90-c364e4602c2e f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Received unexpected event network-vif-plugged-b2fce3d4-667e-40f1-8fad-b23b6e4286db for instance with vm_state deleted and task_state None.#033[00m
Nov 26 23:42:29 compute-0 nova_compute[189387]: 2025-11-26 23:42:29.666 189391 DEBUG nova.network.neutron [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Successfully created port: 798557c8-33b8-48fa-ba80-092115a6af38 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 26 23:42:29 compute-0 podman[203621]: time="2025-11-26T23:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:42:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30755 "" "Go-http-client/1.1"
Nov 26 23:42:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5270 "" "Go-http-client/1.1"
Nov 26 23:42:30 compute-0 nova_compute[189387]: 2025-11-26 23:42:30.533 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:31 compute-0 nova_compute[189387]: 2025-11-26 23:42:31.105 189391 DEBUG nova.network.neutron [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Successfully updated port: 798557c8-33b8-48fa-ba80-092115a6af38 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 26 23:42:31 compute-0 nova_compute[189387]: 2025-11-26 23:42:31.136 189391 DEBUG oslo_concurrency.lockutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Acquiring lock "refresh_cache-2b8e8c61-3efb-436e-87b5-35ac9fe60d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:42:31 compute-0 nova_compute[189387]: 2025-11-26 23:42:31.137 189391 DEBUG oslo_concurrency.lockutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Acquired lock "refresh_cache-2b8e8c61-3efb-436e-87b5-35ac9fe60d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:42:31 compute-0 nova_compute[189387]: 2025-11-26 23:42:31.137 189391 DEBUG nova.network.neutron [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 26 23:42:31 compute-0 openstack_network_exporter[205787]: ERROR   23:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:42:31 compute-0 openstack_network_exporter[205787]: ERROR   23:42:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:42:31 compute-0 openstack_network_exporter[205787]: ERROR   23:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:42:31 compute-0 openstack_network_exporter[205787]: ERROR   23:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:42:31 compute-0 openstack_network_exporter[205787]: ERROR   23:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:42:31 compute-0 nova_compute[189387]: 2025-11-26 23:42:31.499 189391 DEBUG nova.compute.manager [req-be8058fa-e0d6-4945-8e6e-f451df957912 req-abd718cc-a913-47d0-8d21-aec972bc6d94 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Received event network-changed-b298dc50-93b6-439e-8c42-b9795220b150 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:42:31 compute-0 nova_compute[189387]: 2025-11-26 23:42:31.499 189391 DEBUG nova.compute.manager [req-be8058fa-e0d6-4945-8e6e-f451df957912 req-abd718cc-a913-47d0-8d21-aec972bc6d94 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Refreshing instance network info cache due to event network-changed-b298dc50-93b6-439e-8c42-b9795220b150. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 23:42:31 compute-0 nova_compute[189387]: 2025-11-26 23:42:31.499 189391 DEBUG oslo_concurrency.lockutils [req-be8058fa-e0d6-4945-8e6e-f451df957912 req-abd718cc-a913-47d0-8d21-aec972bc6d94 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "refresh_cache-8c6c2d42-56ca-46f9-a12a-54c84adf5dbd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:42:31 compute-0 nova_compute[189387]: 2025-11-26 23:42:31.500 189391 DEBUG oslo_concurrency.lockutils [req-be8058fa-e0d6-4945-8e6e-f451df957912 req-abd718cc-a913-47d0-8d21-aec972bc6d94 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquired lock "refresh_cache-8c6c2d42-56ca-46f9-a12a-54c84adf5dbd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:42:31 compute-0 nova_compute[189387]: 2025-11-26 23:42:31.500 189391 DEBUG nova.network.neutron [req-be8058fa-e0d6-4945-8e6e-f451df957912 req-abd718cc-a913-47d0-8d21-aec972bc6d94 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Refreshing network info cache for port b298dc50-93b6-439e-8c42-b9795220b150 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 23:42:31 compute-0 nova_compute[189387]: 2025-11-26 23:42:31.513 189391 DEBUG nova.network.neutron [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 26 23:42:31 compute-0 nova_compute[189387]: 2025-11-26 23:42:31.685 189391 DEBUG nova.compute.manager [req-37654249-f164-4a89-bba0-1b45fbd65a71 req-a974e7cb-2e97-4f97-9a4d-070651efd102 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Received event network-changed-798557c8-33b8-48fa-ba80-092115a6af38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:42:31 compute-0 nova_compute[189387]: 2025-11-26 23:42:31.685 189391 DEBUG nova.compute.manager [req-37654249-f164-4a89-bba0-1b45fbd65a71 req-a974e7cb-2e97-4f97-9a4d-070651efd102 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Refreshing instance network info cache due to event network-changed-798557c8-33b8-48fa-ba80-092115a6af38. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 23:42:31 compute-0 nova_compute[189387]: 2025-11-26 23:42:31.686 189391 DEBUG oslo_concurrency.lockutils [req-37654249-f164-4a89-bba0-1b45fbd65a71 req-a974e7cb-2e97-4f97-9a4d-070651efd102 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "refresh_cache-2b8e8c61-3efb-436e-87b5-35ac9fe60d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:42:31 compute-0 nova_compute[189387]: 2025-11-26 23:42:31.696 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.427 189391 DEBUG nova.network.neutron [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Updating instance_info_cache with network_info: [{"id": "798557c8-33b8-48fa-ba80-092115a6af38", "address": "fa:16:3e:56:6c:8b", "network": {"id": "d6f23c8c-9266-4c49-bc94-0b9f021c07c2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-495565316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5cd62a5ad724aed83d939e3ba6d7fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798557c8-33", "ovs_interfaceid": "798557c8-33b8-48fa-ba80-092115a6af38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.445 189391 DEBUG oslo_concurrency.lockutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Releasing lock "refresh_cache-2b8e8c61-3efb-436e-87b5-35ac9fe60d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.447 189391 DEBUG nova.compute.manager [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Instance network_info: |[{"id": "798557c8-33b8-48fa-ba80-092115a6af38", "address": "fa:16:3e:56:6c:8b", "network": {"id": "d6f23c8c-9266-4c49-bc94-0b9f021c07c2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-495565316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5cd62a5ad724aed83d939e3ba6d7fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798557c8-33", "ovs_interfaceid": "798557c8-33b8-48fa-ba80-092115a6af38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.448 189391 DEBUG oslo_concurrency.lockutils [req-37654249-f164-4a89-bba0-1b45fbd65a71 req-a974e7cb-2e97-4f97-9a4d-070651efd102 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquired lock "refresh_cache-2b8e8c61-3efb-436e-87b5-35ac9fe60d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.449 189391 DEBUG nova.network.neutron [req-37654249-f164-4a89-bba0-1b45fbd65a71 req-a974e7cb-2e97-4f97-9a4d-070651efd102 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Refreshing network info cache for port 798557c8-33b8-48fa-ba80-092115a6af38 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.454 189391 DEBUG nova.virt.libvirt.driver [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Start _get_guest_xml network_info=[{"id": "798557c8-33b8-48fa-ba80-092115a6af38", "address": "fa:16:3e:56:6c:8b", "network": {"id": "d6f23c8c-9266-4c49-bc94-0b9f021c07c2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-495565316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5cd62a5ad724aed83d939e3ba6d7fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798557c8-33", "ovs_interfaceid": "798557c8-33b8-48fa-ba80-092115a6af38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T23:40:04Z,direct_url=<?>,disk_format='qcow2',id=948c6d5b-0d46-4aec-8649-b6cdcb1a5694,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd2e793599b6418881c391df7f71e0c6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T23:40:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': '948c6d5b-0d46-4aec-8649-b6cdcb1a5694'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.472 189391 WARNING nova.virt.libvirt.driver [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.485 189391 DEBUG nova.virt.libvirt.host [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.487 189391 DEBUG nova.virt.libvirt.host [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.494 189391 DEBUG nova.virt.libvirt.host [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.495 189391 DEBUG nova.virt.libvirt.host [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.496 189391 DEBUG nova.virt.libvirt.driver [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.497 189391 DEBUG nova.virt.hardware [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T23:40:03Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a4234b2d-ed51-4e17-ad57-a8fb6154451b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T23:40:04Z,direct_url=<?>,disk_format='qcow2',id=948c6d5b-0d46-4aec-8649-b6cdcb1a5694,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd2e793599b6418881c391df7f71e0c6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T23:40:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.499 189391 DEBUG nova.virt.hardware [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.500 189391 DEBUG nova.virt.hardware [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.501 189391 DEBUG nova.virt.hardware [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.502 189391 DEBUG nova.virt.hardware [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.502 189391 DEBUG nova.virt.hardware [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.504 189391 DEBUG nova.virt.hardware [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.505 189391 DEBUG nova.virt.hardware [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.506 189391 DEBUG nova.virt.hardware [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.507 189391 DEBUG nova.virt.hardware [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.508 189391 DEBUG nova.virt.hardware [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
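[Annotation] The sequence above (limits, preferences, possible topologies, sorted result) is Nova factoring the flavor's vCPU count into sockets*cores*threads combinations within the 65536 maxima; with 1 vCPU the only factorization is 1:1:1. An illustrative reimplementation of the enumeration step (the real logic lives in nova/virt/hardware.py and differs in detail):

    from collections import namedtuple

    VirtCPUTopology = namedtuple("VirtCPUTopology", "sockets cores threads")

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """Yield every sockets*cores*threads factorization of vcpus within the maxima."""
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            rest = vcpus // s
            for c in range(1, min(rest, max_cores) + 1):
                if rest % c:
                    continue
                t = rest // c
                if t <= max_threads:
                    yield VirtCPUTopology(s, c, t)

    # list(possible_topologies(1)) -> [VirtCPUTopology(sockets=1, cores=1, threads=1)],
    # matching the single topology reported in the log for this 1-vCPU m1.nano flavor.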
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.515 189391 DEBUG nova.virt.libvirt.vif [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T23:42:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-317216903',display_name='tempest-ServerActionsTestJSON-server-317216903',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-317216903',id=11,image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBALDEq66uSnbDCnaPr9NW6WSucskLbrov7y7Lw8g6HLIB9MX0FvV85vzt5NxWgQHUlHzOWK54yVo80owjUx7VTSNbmpWR1rSDduj9dcSmqSox75C4uo2VseotetFpoaEEg==',key_name='tempest-keypair-1149430954',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b5cd62a5ad724aed83d939e3ba6d7fd7',ramdisk_id='',reservation_id='r-a5ssvw5x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1783347258',owner_user_name='tempest-ServerActionsTestJSON-1783347258-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T23:42:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3753fb1a520b4e088ce6979db5ae3773',uuid=2b8e8c61-3efb-436e-87b5-35ac9fe60d69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "798557c8-33b8-48fa-ba80-092115a6af38", "address": "fa:16:3e:56:6c:8b", "network": {"id": "d6f23c8c-9266-4c49-bc94-0b9f021c07c2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-495565316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5cd62a5ad724aed83d939e3ba6d7fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798557c8-33", "ovs_interfaceid": "798557c8-33b8-48fa-ba80-092115a6af38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.516 189391 DEBUG nova.network.os_vif_util [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Converting VIF {"id": "798557c8-33b8-48fa-ba80-092115a6af38", "address": "fa:16:3e:56:6c:8b", "network": {"id": "d6f23c8c-9266-4c49-bc94-0b9f021c07c2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-495565316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5cd62a5ad724aed83d939e3ba6d7fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798557c8-33", "ovs_interfaceid": "798557c8-33b8-48fa-ba80-092115a6af38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.517 189391 DEBUG nova.network.os_vif_util [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:6c:8b,bridge_name='br-int',has_traffic_filtering=True,id=798557c8-33b8-48fa-ba80-092115a6af38,network=Network(d6f23c8c-9266-4c49-bc94-0b9f021c07c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap798557c8-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
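[Annotation] nova_to_osvif_vif converts the legacy network-info dict into the typed os-vif object shown on the "Converted object" line. The essential field mapping (details.bridge_name, port_filter becoming has_traffic_filtering, devname becoming vif_name) can be mirrored with a plain dataclass; this is a simplified stand-in that runs without os-vif installed, not the real os_vif.objects.vif.VIFOpenVSwitch class, and it omits the nested Network object for brevity:

    from dataclasses import dataclass

    @dataclass
    class SimpleVIFOpenVSwitch:  # stand-in for os_vif.objects.vif.VIFOpenVSwitch
        id: str
        address: str
        bridge_name: str
        vif_name: str
        has_traffic_filtering: bool
        active: bool
        preserve_on_delete: bool

    def nova_vif_to_ovs(vif: dict) -> SimpleVIFOpenVSwitch:
        details = vif.get("details", {})
        return SimpleVIFOpenVSwitch(
            id=vif["id"],
            address=vif["address"],
            bridge_name=details.get("bridge_name", vif["network"]["bridge"]),
            vif_name=vif["devname"],                      # e.g. "tap798557c8-33"
            has_traffic_filtering=details.get("port_filter", False),
            active=vif.get("active", False),
            preserve_on_delete=vif.get("preserve_on_delete", False),
        )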
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.519 189391 DEBUG nova.objects.instance [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b8e8c61-3efb-436e-87b5-35ac9fe60d69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.531 189391 DEBUG nova.virt.libvirt.driver [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] End _get_guest_xml xml=<domain type="kvm">
Nov 26 23:42:32 compute-0 nova_compute[189387]:  <uuid>2b8e8c61-3efb-436e-87b5-35ac9fe60d69</uuid>
Nov 26 23:42:32 compute-0 nova_compute[189387]:  <name>instance-0000000b</name>
Nov 26 23:42:32 compute-0 nova_compute[189387]:  <memory>131072</memory>
Nov 26 23:42:32 compute-0 nova_compute[189387]:  <vcpu>1</vcpu>
Nov 26 23:42:32 compute-0 nova_compute[189387]:  <metadata>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 23:42:32 compute-0 nova_compute[189387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:      <nova:name>tempest-ServerActionsTestJSON-server-317216903</nova:name>
Nov 26 23:42:32 compute-0 nova_compute[189387]:      <nova:creationTime>2025-11-26 23:42:32</nova:creationTime>
Nov 26 23:42:32 compute-0 nova_compute[189387]:      <nova:flavor name="m1.nano">
Nov 26 23:42:32 compute-0 nova_compute[189387]:        <nova:memory>128</nova:memory>
Nov 26 23:42:32 compute-0 nova_compute[189387]:        <nova:disk>1</nova:disk>
Nov 26 23:42:32 compute-0 nova_compute[189387]:        <nova:swap>0</nova:swap>
Nov 26 23:42:32 compute-0 nova_compute[189387]:        <nova:ephemeral>0</nova:ephemeral>
Nov 26 23:42:32 compute-0 nova_compute[189387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 23:42:32 compute-0 nova_compute[189387]:      </nova:flavor>
Nov 26 23:42:32 compute-0 nova_compute[189387]:      <nova:owner>
Nov 26 23:42:32 compute-0 nova_compute[189387]:        <nova:user uuid="3753fb1a520b4e088ce6979db5ae3773">tempest-ServerActionsTestJSON-1783347258-project-member</nova:user>
Nov 26 23:42:32 compute-0 nova_compute[189387]:        <nova:project uuid="b5cd62a5ad724aed83d939e3ba6d7fd7">tempest-ServerActionsTestJSON-1783347258</nova:project>
Nov 26 23:42:32 compute-0 nova_compute[189387]:      </nova:owner>
Nov 26 23:42:32 compute-0 nova_compute[189387]:      <nova:root type="image" uuid="948c6d5b-0d46-4aec-8649-b6cdcb1a5694"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:      <nova:ports>
Nov 26 23:42:32 compute-0 nova_compute[189387]:        <nova:port uuid="798557c8-33b8-48fa-ba80-092115a6af38">
Nov 26 23:42:32 compute-0 nova_compute[189387]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:        </nova:port>
Nov 26 23:42:32 compute-0 nova_compute[189387]:      </nova:ports>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    </nova:instance>
Nov 26 23:42:32 compute-0 nova_compute[189387]:  </metadata>
Nov 26 23:42:32 compute-0 nova_compute[189387]:  <sysinfo type="smbios">
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <system>
Nov 26 23:42:32 compute-0 nova_compute[189387]:      <entry name="manufacturer">RDO</entry>
Nov 26 23:42:32 compute-0 nova_compute[189387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 23:42:32 compute-0 nova_compute[189387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 23:42:32 compute-0 nova_compute[189387]:      <entry name="serial">2b8e8c61-3efb-436e-87b5-35ac9fe60d69</entry>
Nov 26 23:42:32 compute-0 nova_compute[189387]:      <entry name="uuid">2b8e8c61-3efb-436e-87b5-35ac9fe60d69</entry>
Nov 26 23:42:32 compute-0 nova_compute[189387]:      <entry name="family">Virtual Machine</entry>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    </system>
Nov 26 23:42:32 compute-0 nova_compute[189387]:  </sysinfo>
Nov 26 23:42:32 compute-0 nova_compute[189387]:  <os>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <boot dev="hd"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <smbios mode="sysinfo"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:  </os>
Nov 26 23:42:32 compute-0 nova_compute[189387]:  <features>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <acpi/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <apic/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <vmcoreinfo/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:  </features>
Nov 26 23:42:32 compute-0 nova_compute[189387]:  <clock offset="utc">
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <timer name="hpet" present="no"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:  </clock>
Nov 26 23:42:32 compute-0 nova_compute[189387]:  <cpu mode="host-model" match="exact">
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:  </cpu>
Nov 26 23:42:32 compute-0 nova_compute[189387]:  <devices>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <disk type="file" device="disk">
Nov 26 23:42:32 compute-0 nova_compute[189387]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:      <target dev="vda" bus="virtio"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <disk type="file" device="cdrom">
Nov 26 23:42:32 compute-0 nova_compute[189387]:      <driver name="qemu" type="raw" cache="none"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.config"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:      <target dev="sda" bus="sata"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <interface type="ethernet">
Nov 26 23:42:32 compute-0 nova_compute[189387]:      <mac address="fa:16:3e:56:6c:8b"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:      <mtu size="1442"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:      <target dev="tap798557c8-33"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    </interface>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <serial type="pty">
Nov 26 23:42:32 compute-0 nova_compute[189387]:      <log file="/var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/console.log" append="off"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    </serial>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <video>
Nov 26 23:42:32 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    </video>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <input type="tablet" bus="usb"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <rng model="virtio">
Nov 26 23:42:32 compute-0 nova_compute[189387]:      <backend model="random">/dev/urandom</backend>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    </rng>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <controller type="usb" index="0"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    <memballoon model="virtio">
Nov 26 23:42:32 compute-0 nova_compute[189387]:      <stats period="10"/>
Nov 26 23:42:32 compute-0 nova_compute[189387]:    </memballoon>
Nov 26 23:42:32 compute-0 nova_compute[189387]:  </devices>
Nov 26 23:42:32 compute-0 nova_compute[189387]: </domain>
Nov 26 23:42:32 compute-0 nova_compute[189387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
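[Annotation] The dump above is ordinary libvirt domain XML, so the key facts Nova just logged (memory is in KiB, one vCPU, two disks, the tap device) can be read back with the standard library. A small reader, assuming the XML has been saved to a file named guest.xml:

    import xml.etree.ElementTree as ET

    dom = ET.parse("guest.xml").getroot()          # <domain type="kvm">

    memory_kib = int(dom.findtext("memory"))       # 131072 KiB == 128 MiB, matching m1.nano
    vcpus = int(dom.findtext("vcpu"))              # 1

    disks = [d.find("source").get("file")
             for d in dom.findall("./devices/disk")
             if d.find("source") is not None]      # .../disk and .../disk.config

    taps = [i.find("target").get("dev")
            for i in dom.findall("./devices/interface")
            if i.find("target") is not None]       # ['tap798557c8-33']

    print(memory_kib // 1024, "MiB,", vcpus, "vCPU")
    print("disks:", disks)
    print("taps:", taps)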
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.542 189391 DEBUG nova.compute.manager [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Preparing to wait for external event network-vif-plugged-798557c8-33b8-48fa-ba80-092115a6af38 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.542 189391 DEBUG oslo_concurrency.lockutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Acquiring lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.542 189391 DEBUG oslo_concurrency.lockutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.543 189391 DEBUG oslo_concurrency.lockutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
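[Annotation] This lock dance is Nova's prepare-then-wait pattern: interest in network-vif-plugged-<port-id> is registered under the per-instance "-events" lock before the domain is defined, so Neutron's notification cannot race the waiter. A minimal analogue with plain threading primitives (illustrative only, not the nova.compute.manager implementation):

    import threading

    class InstanceEvents:
        """Register interest in an external event before starting the work
        that produces it, then block; mirrors prepare_for_instance_event()."""
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}      # (instance_uuid, event_name) -> threading.Event

        def prepare(self, instance_uuid, event_name):
            with self._lock:       # cf. the "<uuid>-events" lock in the log
                return self._events.setdefault((instance_uuid, event_name),
                                               threading.Event())

        def notify(self, instance_uuid, event_name):
            with self._lock:
                ev = self._events.pop((instance_uuid, event_name), None)
            if ev:
                ev.set()

    # usage: register first, then launch the domain, then block until notified
    # waiter = events.prepare(uuid, "network-vif-plugged-798557c8-...")
    # ... define and start the guest ...
    # waiter.wait(timeout=300)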
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.543 189391 DEBUG nova.virt.libvirt.vif [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T23:42:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-317216903',display_name='tempest-ServerActionsTestJSON-server-317216903',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-317216903',id=11,image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBALDEq66uSnbDCnaPr9NW6WSucskLbrov7y7Lw8g6HLIB9MX0FvV85vzt5NxWgQHUlHzOWK54yVo80owjUx7VTSNbmpWR1rSDduj9dcSmqSox75C4uo2VseotetFpoaEEg==',key_name='tempest-keypair-1149430954',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b5cd62a5ad724aed83d939e3ba6d7fd7',ramdisk_id='',reservation_id='r-a5ssvw5x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1783347258',owner_user_name='tempest-ServerActionsTestJSON-1783347258-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T23:42:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3753fb1a520b4e088ce6979db5ae3773',uuid=2b8e8c61-3efb-436e-87b5-35ac9fe60d69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "798557c8-33b8-48fa-ba80-092115a6af38", "address": "fa:16:3e:56:6c:8b", "network": {"id": "d6f23c8c-9266-4c49-bc94-0b9f021c07c2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-495565316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5cd62a5ad724aed83d939e3ba6d7fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798557c8-33", "ovs_interfaceid": "798557c8-33b8-48fa-ba80-092115a6af38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.543 189391 DEBUG nova.network.os_vif_util [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Converting VIF {"id": "798557c8-33b8-48fa-ba80-092115a6af38", "address": "fa:16:3e:56:6c:8b", "network": {"id": "d6f23c8c-9266-4c49-bc94-0b9f021c07c2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-495565316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5cd62a5ad724aed83d939e3ba6d7fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798557c8-33", "ovs_interfaceid": "798557c8-33b8-48fa-ba80-092115a6af38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.544 189391 DEBUG nova.network.os_vif_util [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:6c:8b,bridge_name='br-int',has_traffic_filtering=True,id=798557c8-33b8-48fa-ba80-092115a6af38,network=Network(d6f23c8c-9266-4c49-bc94-0b9f021c07c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap798557c8-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.544 189391 DEBUG os_vif [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:6c:8b,bridge_name='br-int',has_traffic_filtering=True,id=798557c8-33b8-48fa-ba80-092115a6af38,network=Network(d6f23c8c-9266-4c49-bc94-0b9f021c07c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap798557c8-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.545 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.545 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.546 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.550 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.551 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap798557c8-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.552 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap798557c8-33, col_values=(('external_ids', {'iface-id': '798557c8-33b8-48fa-ba80-092115a6af38', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:56:6c:8b', 'vm-uuid': '2b8e8c61-3efb-436e-87b5-35ac9fe60d69'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
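[Annotation] The AddPortCommand/DbSetCommand pair is the heart of VIF plugging: once an OVS Interface row carries external_ids:iface-id matching a logical switch port, ovn-controller can claim it (as the binding messages further down show). The same two-step transaction expressed with the ovs-vsctl CLI instead of ovsdbapp, using the values from this log:

    import subprocess

    port = "tap798557c8-33"
    iface_id = "798557c8-33b8-48fa-ba80-092115a6af38"
    vm_uuid = "2b8e8c61-3efb-436e-87b5-35ac9fe60d69"

    # --may-exist mirrors AddPortCommand(may_exist=True); the "--" chains the
    # set command into the same OVSDB transaction, like txn n=1 idx=0/idx=1.
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", port, "--",
         "set", "Interface", port,
         f"external_ids:iface-id={iface_id}",
         "external_ids:iface-status=active",
         'external_ids:attached-mac="fa:16:3e:56:6c:8b"',
         f"external_ids:vm-uuid={vm_uuid}"],
        check=True,
    )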
Nov 26 23:42:32 compute-0 NetworkManager[56227]: <info>  [1764200552.5548] manager: (tap798557c8-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.554 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.561 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.566 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.567 189391 INFO os_vif [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:6c:8b,bridge_name='br-int',has_traffic_filtering=True,id=798557c8-33b8-48fa-ba80-092115a6af38,network=Network(d6f23c8c-9266-4c49-bc94-0b9f021c07c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap798557c8-33')#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.643 189391 DEBUG nova.virt.libvirt.driver [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.644 189391 DEBUG nova.virt.libvirt.driver [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.644 189391 DEBUG nova.virt.libvirt.driver [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] No VIF found with MAC fa:16:3e:56:6c:8b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 26 23:42:32 compute-0 nova_compute[189387]: 2025-11-26 23:42:32.645 189391 INFO nova.virt.libvirt.driver [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Using config drive#033[00m
Nov 26 23:42:33 compute-0 nova_compute[189387]: 2025-11-26 23:42:33.170 189391 INFO nova.virt.libvirt.driver [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Creating config drive at /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.config#033[00m
Nov 26 23:42:33 compute-0 nova_compute[189387]: 2025-11-26 23:42:33.179 189391 DEBUG oslo_concurrency.processutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3z_ooym1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:42:33 compute-0 nova_compute[189387]: 2025-11-26 23:42:33.215 189391 DEBUG nova.network.neutron [req-be8058fa-e0d6-4945-8e6e-f451df957912 req-abd718cc-a913-47d0-8d21-aec972bc6d94 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Updated VIF entry in instance network info cache for port b298dc50-93b6-439e-8c42-b9795220b150. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 23:42:33 compute-0 nova_compute[189387]: 2025-11-26 23:42:33.217 189391 DEBUG nova.network.neutron [req-be8058fa-e0d6-4945-8e6e-f451df957912 req-abd718cc-a913-47d0-8d21-aec972bc6d94 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Updating instance_info_cache with network_info: [{"id": "b298dc50-93b6-439e-8c42-b9795220b150", "address": "fa:16:3e:77:71:58", "network": {"id": "3f903c92-a599-4991-906d-3ed8e3e8eabd", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2000708722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75af4c8383fc485a90ab9085bbabf0f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb298dc50-93", "ovs_interfaceid": "b298dc50-93b6-439e-8c42-b9795220b150", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:42:33 compute-0 nova_compute[189387]: 2025-11-26 23:42:33.247 189391 DEBUG oslo_concurrency.lockutils [req-be8058fa-e0d6-4945-8e6e-f451df957912 req-abd718cc-a913-47d0-8d21-aec972bc6d94 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Releasing lock "refresh_cache-8c6c2d42-56ca-46f9-a12a-54c84adf5dbd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:42:33 compute-0 nova_compute[189387]: 2025-11-26 23:42:33.330 189391 DEBUG oslo_concurrency.processutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3z_ooym1" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
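[Annotation] The config drive is nothing more than an ISO 9660 image whose volume label, config-2, is what cloud-init (or the CirrOS init scripts) looks for at boot. The invocation below reuses the exact flags from the logged command; the staging directory and output path are placeholders, since Nova uses a tmpdir and the instance directory:

    import subprocess

    SRC = "/tmp/configdrive-src"   # illustrative staging dir holding the metadata trees
    OUT = "/tmp/disk.config"       # illustrative output path

    subprocess.run(
        ["/usr/bin/mkisofs", "-o", OUT,
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r",
         "-V", "config-2",         # the volume label guests key on
         SRC],
        check=True,
    )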
Nov 26 23:42:33 compute-0 kernel: tap798557c8-33: entered promiscuous mode
Nov 26 23:42:33 compute-0 NetworkManager[56227]: <info>  [1764200553.4849] manager: (tap798557c8-33): new Tun device (/org/freedesktop/NetworkManager/Devices/56)
Nov 26 23:42:33 compute-0 ovn_controller[97697]: 2025-11-26T23:42:33Z|00155|binding|INFO|Claiming lport 798557c8-33b8-48fa-ba80-092115a6af38 for this chassis.
Nov 26 23:42:33 compute-0 ovn_controller[97697]: 2025-11-26T23:42:33Z|00156|binding|INFO|798557c8-33b8-48fa-ba80-092115a6af38: Claiming fa:16:3e:56:6c:8b 10.100.0.6
Nov 26 23:42:33 compute-0 nova_compute[189387]: 2025-11-26 23:42:33.483 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:33.497 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:6c:8b 10.100.0.6'], port_security=['fa:16:3e:56:6c:8b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2b8e8c61-3efb-436e-87b5-35ac9fe60d69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d6f23c8c-9266-4c49-bc94-0b9f021c07c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5cd62a5ad724aed83d939e3ba6d7fd7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4dbe9fb4-ed7b-48b4-a9c5-2b96bb554e51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b0599c7c-1f2c-4f1e-9216-c20a57ddeefa, chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=798557c8-33b8-48fa-ba80-092115a6af38) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:33.499 106595 INFO neutron.agent.ovn.metadata.agent [-] Port 798557c8-33b8-48fa-ba80-092115a6af38 in datapath d6f23c8c-9266-4c49-bc94-0b9f021c07c2 bound to our chassis#033[00m
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:33.504 106595 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d6f23c8c-9266-4c49-bc94-0b9f021c07c2#033[00m
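[Annotation] The agent reacted because it subscribes to Southbound Port_Binding updates through ovsdbapp's event machinery; the "Matched UPDATE" line above is that subscription firing. A skeletal watcher of the same shape (the real class is neutron's PortBindingUpdatedEvent; the column access details here, such as chassis[0].name, are assumptions about the SB schema bindings):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBoundHere(row_event.RowEvent):
        """Fire when a Port_Binding row is claimed by our chassis."""
        def __init__(self, chassis_name):
            self.chassis_name = chassis_name
            # watch UPDATEs on the Southbound Port_Binding table
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)

        def match_fn(self, event, row, old=None):
            # only the unbound -> bound-on-us transition is interesting:
            # old.chassis was empty, row.chassis now names this host
            try:
                return (not old.chassis) and row.chassis \
                    and row.chassis[0].name == self.chassis_name
            except AttributeError:
                return False

        def run(self, event, row, old):
            print("provision metadata for port", row.logical_port)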
Nov 26 23:42:33 compute-0 nova_compute[189387]: 2025-11-26 23:42:33.509 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:33 compute-0 ovn_controller[97697]: 2025-11-26T23:42:33Z|00157|binding|INFO|Setting lport 798557c8-33b8-48fa-ba80-092115a6af38 ovn-installed in OVS
Nov 26 23:42:33 compute-0 ovn_controller[97697]: 2025-11-26T23:42:33Z|00158|binding|INFO|Setting lport 798557c8-33b8-48fa-ba80-092115a6af38 up in Southbound
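[Annotation] With the lport claimed and marked up in the Southbound database, the binding can be verified from the command line; a read-only check that dumps the same Port_Binding row the ovn_controller binding|INFO messages above were updating (chassis, mac, up, tunnel_key, and so on):

    import subprocess

    LPORT = "798557c8-33b8-48fa-ba80-092115a6af38"

    out = subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding", f"logical_port={LPORT}"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)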
Nov 26 23:42:33 compute-0 nova_compute[189387]: 2025-11-26 23:42:33.519 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:33.520 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[909c4991-e4b9-4d29-b1ff-63d27788ebac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:33.524 106595 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd6f23c8c-91 in ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
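[Annotation] Provisioning the datapath means building a veth pair whose inner end (tapd6f23c8c-91) lives inside the ovnmeta-<network-uuid> namespace and whose outer end (tapd6f23c8c-90) is plugged into br-int, which is what the Del/AddPortCommand transactions below do. The same plumbing via the ip(8) CLI, using the names from this log (error handling and idempotence omitted; the agent itself does this through privsep and pyroute2):

    import subprocess

    NS = "ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2"
    OVS_END, NS_END = "tapd6f23c8c-90", "tapd6f23c8c-91"

    def sh(*args):
        subprocess.run(args, check=True)

    sh("ip", "netns", "add", NS)
    sh("ip", "link", "add", OVS_END, "type", "veth", "peer", "name", NS_END)
    sh("ip", "link", "set", NS_END, "netns", NS)      # inner end into the namespace
    sh("ip", "netns", "exec", NS, "ip", "link", "set", NS_END, "up")
    sh("ip", "link", "set", OVS_END, "up")
    # the OVS_END side is then added to br-int with external_ids:iface-id set,
    # as the AddPortCommand/DbSetCommand transactions below show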
Nov 26 23:42:33 compute-0 systemd-udevd[251477]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:33.528 239757 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd6f23c8c-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:33.528 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[517d9f83-c5ef-449c-9682-fc1559bd293a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:33.532 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[d6665d9c-3cf9-4550-8c68-83be90000ba3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:33.544 106708 DEBUG oslo.privsep.daemon [-] privsep: reply[f5c4e5f4-f9b1-476a-8003-e58821f24495]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:33 compute-0 NetworkManager[56227]: <info>  [1764200553.5539] device (tap798557c8-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 23:42:33 compute-0 systemd-machined[155674]: New machine qemu-11-instance-0000000b.
Nov 26 23:42:33 compute-0 NetworkManager[56227]: <info>  [1764200553.5550] device (tap798557c8-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 23:42:33 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:33.573 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[cc2c54cc-944e-4b17-8145-37060820667c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:33 compute-0 podman[251460]: 2025-11-26 23:42:33.600976253 +0000 UTC m=+0.161556812 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, container_name=ceilometer_agent_compute)
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:33.622 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[54ffa8e0-684b-4730-8596-9ae3db563da1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:33 compute-0 NetworkManager[56227]: <info>  [1764200553.6496] manager: (tapd6f23c8c-90): new Veth device (/org/freedesktop/NetworkManager/Devices/57)
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:33.648 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[39eaef1f-2e42-45b9-b5fb-62c580e70289]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:33.697 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[678f3161-86b5-4919-9a0b-11059a5470f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:33.703 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[0db496ef-cd53-4962-b54e-9334b168b336]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:33 compute-0 NetworkManager[56227]: <info>  [1764200553.7282] device (tapd6f23c8c-90): carrier: link connected
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:33.735 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[05b851c9-0077-4a02-93ee-13d4dea07c86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:33.762 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[f24f5353-3e4a-49d0-8bab-cc89ad9e6b39]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd6f23c8c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:31:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 523638, 'reachable_time': 28295, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251519, 'error': None, 'target': 'ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:33.775 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[d6c9198b-13e8-4d40-af2b-fa9ddc0a7303]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe92:313b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 523638, 'tstamp': 523638}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251520, 'error': None, 'target': 'ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:33.791 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[86d0eb92-1fd1-4d4f-aee5-0c37a3b05323]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd6f23c8c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:31:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 34], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 523638, 'reachable_time': 28295, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251521, 'error': None, 'target': 'ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
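
The two privsep replies above are pyroute2-style netlink payloads: an RTM_NEWADDR carrying the tap's link-local address and an RTM_NEWLINK describing tapd6f23c8c-91 inside the ovnmeta- namespace. A minimal sketch of reading the same attributes directly with pyroute2 (namespace and interface names taken from the log; the agent itself routes these calls through oslo.privsep rather than opening the socket in-process):

    # Sketch: inspect the link/address state shown in the replies above.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2') as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),      # e.g. 'tapd6f23c8c-91'
                  link.get_attr('IFLA_OPERSTATE'),   # e.g. 'UP'
                  link.get_attr('IFLA_ADDRESS'))     # e.g. 'fa:16:3e:92:31:3b'
        for addr in ns.get_addr(family=10):          # family 10 == AF_INET6
            print(addr.get_attr('IFA_ADDRESS'))      # e.g. 'fe80::f816:3eff:fe92:313b'
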
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:33.821 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[4bae9c57-1724-4625-9535-25e5cb877f2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:33.917 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[39d0a1e1-8e85-473d-a338-fca308e233ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:33.919 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6f23c8c-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:33.920 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:33.920 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6f23c8c-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:42:33 compute-0 nova_compute[189387]: 2025-11-26 23:42:33.922 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:33 compute-0 kernel: tapd6f23c8c-90: entered promiscuous mode
Nov 26 23:42:33 compute-0 NetworkManager[56227]: <info>  [1764200553.9258] manager: (tapd6f23c8c-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:33.926 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd6f23c8c-90, col_values=(('external_ids', {'iface-id': '7b0be577-69f9-4df8-992b-e7c104217e56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
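
The three ovsdbapp commands in this sequence (DelPortCommand, AddPortCommand, DbSetCommand) detach the tap from br-ex if present, plug it into br-int, and stamp the Interface row with the iface-id that ovn-controller matches against the logical port; the delete is a no-op here, hence "Transaction caused no change". A hedged sketch of the same operations via ovsdbapp (socket path and timeout are assumptions; the agent holds a long-lived IDL connection instead of building one per call):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # The log runs each command as its own txn n=1; batched here for brevity.
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapd6f23c8c-90', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapd6f23c8c-90', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapd6f23c8c-90',
            ('external_ids', {'iface-id': '7b0be577-69f9-4df8-992b-e7c104217e56'})))
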
Nov 26 23:42:33 compute-0 ovn_controller[97697]: 2025-11-26T23:42:33Z|00159|binding|INFO|Releasing lport 7b0be577-69f9-4df8-992b-e7c104217e56 from this chassis (sb_readonly=0)
Nov 26 23:42:33 compute-0 nova_compute[189387]: 2025-11-26 23:42:33.932 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:33.933 106595 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d6f23c8c-9266-4c49-bc94-0b9f021c07c2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d6f23c8c-9266-4c49-bc94-0b9f021c07c2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:33.933 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[85ce1f26-1cef-459c-a2eb-7ced52ff4738]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
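
The "Unable to access ... .pid.haproxy" DEBUG above is the expected first-start probe: before spawning a proxy the agent checks for a pidfile left by a previous haproxy, and a missing file simply means there is nothing to reload. The helper pattern, sketched (not neutron's exact implementation):

    import logging

    LOG = logging.getLogger(__name__)

    def get_value_from_file(path, converter=None):
        # Best-effort read of a small state file such as a haproxy pidfile.
        # A missing file is normal on first start: log at DEBUG, return None.
        try:
            with open(path) as f:
                value = f.read().strip()
            return converter(value) if converter else value
        except (OSError, ValueError) as e:
            LOG.debug('Unable to access %s; Error: %s', path, e)
            return None
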
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:33.934 106595 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: global
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]:    log         /dev/log local0 debug
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]:    log-tag     haproxy-metadata-proxy-d6f23c8c-9266-4c49-bc94-0b9f021c07c2
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]:    user        root
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]:    group       root
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]:    maxconn     1024
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]:    pidfile     /var/lib/neutron/external/pids/d6f23c8c-9266-4c49-bc94-0b9f021c07c2.pid.haproxy
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]:    daemon
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: defaults
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]:    log global
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]:    mode http
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]:    option httplog
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]:    option dontlognull
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]:    option http-server-close
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]:    option forwardfor
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]:    retries                 3
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]:    timeout http-request    30s
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]:    timeout connect         30s
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]:    timeout client          32s
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]:    timeout server          32s
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]:    timeout http-keep-alive 30s
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: listen listener
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]:    bind 169.254.169.254:80
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]:    server metadata /var/lib/neutron/metadata_proxy
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]:    http-request add-header X-OVN-Network-ID d6f23c8c-9266-4c49-bc94-0b9f021c07c2
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
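
Everything variable in the haproxy_cfg dump above derives from one value, the network UUID: the log-tag, the pidfile, and the X-OVN-Network-ID header the proxy injects before handing requests to the metadata agent's unix socket. A sketch of the rendering step (template text copied from the log; the function shape is assumed and only borrows the create_config_file name from the logged path):

    _TEMPLATE = """\
    global
        log         /dev/log local0 debug
        log-tag     haproxy-metadata-proxy-{network_id}
        user        root
        group       root
        maxconn     1024
        pidfile     {state_path}/external/pids/{network_id}.pid.haproxy
        daemon

    defaults
        log global
        mode http
        option httplog
        option dontlognull
        option http-server-close
        option forwardfor
        retries                 3
        timeout http-request    30s
        timeout connect         30s
        timeout client          32s
        timeout server          32s
        timeout http-keep-alive 30s

    listen listener
        bind 169.254.169.254:80
        server metadata {state_path}/metadata_proxy
        http-request add-header X-OVN-Network-ID {network_id}
    """

    def create_config_file(network_id, state_path='/var/lib/neutron'):
        # Render and write the per-network config the agent logged above.
        path = '{}/ovn-metadata-proxy/{}.conf'.format(state_path, network_id)
        with open(path, 'w') as f:
            f.write(_TEMPLATE.format(network_id=network_id, state_path=state_path))
        return path
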
Nov 26 23:42:33 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:42:33.935 106595 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2', 'env', 'PROCESS_TAG=haproxy-d6f23c8c-9266-4c49-bc94-0b9f021c07c2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d6f23c8c-9266-4c49-bc94-0b9f021c07c2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
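
The create_process line above shows how the proxy is launched: rootwrap escalates privileges, ip netns exec enters the ovnmeta namespace, and a PROCESS_TAG environment variable tags the haproxy process so the agent can find it again later. Rebuilding that argv (the argv shape is copied from the log; the wrapper function is illustrative):

    import subprocess

    def spawn_metadata_haproxy(network_id, cfg_path):
        cmd = ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
               'ip', 'netns', 'exec', 'ovnmeta-%s' % network_id,
               'env', 'PROCESS_TAG=haproxy-%s' % network_id,
               'haproxy', '-f', cfg_path]
        return subprocess.Popen(cmd)
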
Nov 26 23:42:33 compute-0 nova_compute[189387]: 2025-11-26 23:42:33.954 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:33 compute-0 nova_compute[189387]: 2025-11-26 23:42:33.973 189391 DEBUG nova.compute.manager [req-847df173-9d39-4b30-ac67-974b74383576 req-e200e484-2319-4bf9-853d-11369ddaa9c3 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Received event network-vif-plugged-798557c8-33b8-48fa-ba80-092115a6af38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:42:33 compute-0 nova_compute[189387]: 2025-11-26 23:42:33.974 189391 DEBUG oslo_concurrency.lockutils [req-847df173-9d39-4b30-ac67-974b74383576 req-e200e484-2319-4bf9-853d-11369ddaa9c3 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:33 compute-0 nova_compute[189387]: 2025-11-26 23:42:33.975 189391 DEBUG oslo_concurrency.lockutils [req-847df173-9d39-4b30-ac67-974b74383576 req-e200e484-2319-4bf9-853d-11369ddaa9c3 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:42:33 compute-0 nova_compute[189387]: 2025-11-26 23:42:33.976 189391 DEBUG oslo_concurrency.lockutils [req-847df173-9d39-4b30-ac67-974b74383576 req-e200e484-2319-4bf9-853d-11369ddaa9c3 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:42:33 compute-0 nova_compute[189387]: 2025-11-26 23:42:33.977 189391 DEBUG nova.compute.manager [req-847df173-9d39-4b30-ac67-974b74383576 req-e200e484-2319-4bf9-853d-11369ddaa9c3 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Processing event network-vif-plugged-798557c8-33b8-48fa-ba80-092115a6af38 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
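
The five nova_compute lines above are the external-event fast path: neutron reports network-vif-plugged, the per-instance "-events" lock is held for about a millisecond while the matching waiter is popped, and the event is then processed outside the lock. The pattern with stdlib primitives (a sketch; nova uses oslo.concurrency fair locks and its own event objects):

    import threading
    from collections import defaultdict

    class InstanceEvents:
        # Match externally delivered events against in-process waiters.
        def __init__(self):
            self._lock = threading.Lock()
            self._waiters = defaultdict(dict)  # uuid -> {event name: Event}

        def prepare(self, uuid, name):
            with self._lock:
                waiter = threading.Event()
                self._waiters[uuid][name] = waiter
                return waiter

        def pop(self, uuid, name):
            with self._lock:  # the Acquiring/acquired/released trio in the log
                waiter = self._waiters[uuid].pop(name, None)
            if waiter is None:
                return False  # logged later as "No waiting events found ..."
            waiter.set()      # wakes wait_for_instance_event
            return True
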
Nov 26 23:42:34 compute-0 nova_compute[189387]: 2025-11-26 23:42:34.154 189391 DEBUG nova.network.neutron [req-37654249-f164-4a89-bba0-1b45fbd65a71 req-a974e7cb-2e97-4f97-9a4d-070651efd102 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Updated VIF entry in instance network info cache for port 798557c8-33b8-48fa-ba80-092115a6af38. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 23:42:34 compute-0 nova_compute[189387]: 2025-11-26 23:42:34.156 189391 DEBUG nova.network.neutron [req-37654249-f164-4a89-bba0-1b45fbd65a71 req-a974e7cb-2e97-4f97-9a4d-070651efd102 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Updating instance_info_cache with network_info: [{"id": "798557c8-33b8-48fa-ba80-092115a6af38", "address": "fa:16:3e:56:6c:8b", "network": {"id": "d6f23c8c-9266-4c49-bc94-0b9f021c07c2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-495565316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5cd62a5ad724aed83d939e3ba6d7fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798557c8-33", "ovs_interfaceid": "798557c8-33b8-48fa-ba80-092115a6af38", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
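
The instance_info_cache payload above is a JSON list of VIFs, each embedding its network, subnets, and per-subnet IPs. Reading the fixed addresses back out of exactly that structure:

    def fixed_ips(network_info):
        # For the payload above: [('798557c8-33b8-...', '10.100.0.6')]
        result = []
        for vif in network_info:
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    if ip['type'] == 'fixed':
                        result.append((vif['id'], ip['address']))
        return result
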
Nov 26 23:42:34 compute-0 nova_compute[189387]: 2025-11-26 23:42:34.218 189391 DEBUG oslo_concurrency.lockutils [req-37654249-f164-4a89-bba0-1b45fbd65a71 req-a974e7cb-2e97-4f97-9a4d-070651efd102 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Releasing lock "refresh_cache-2b8e8c61-3efb-436e-87b5-35ac9fe60d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:42:34 compute-0 nova_compute[189387]: 2025-11-26 23:42:34.260 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200554.2597842, 2b8e8c61-3efb-436e-87b5-35ac9fe60d69 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:42:34 compute-0 nova_compute[189387]: 2025-11-26 23:42:34.260 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] VM Started (Lifecycle Event)#033[00m
Nov 26 23:42:34 compute-0 nova_compute[189387]: 2025-11-26 23:42:34.263 189391 DEBUG nova.compute.manager [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 26 23:42:34 compute-0 nova_compute[189387]: 2025-11-26 23:42:34.269 189391 DEBUG nova.virt.libvirt.driver [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 26 23:42:34 compute-0 nova_compute[189387]: 2025-11-26 23:42:34.274 189391 INFO nova.virt.libvirt.driver [-] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Instance spawned successfully.#033[00m
Nov 26 23:42:34 compute-0 nova_compute[189387]: 2025-11-26 23:42:34.275 189391 DEBUG nova.virt.libvirt.driver [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 26 23:42:34 compute-0 nova_compute[189387]: 2025-11-26 23:42:34.332 189391 DEBUG nova.virt.libvirt.driver [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:42:34 compute-0 nova_compute[189387]: 2025-11-26 23:42:34.332 189391 DEBUG nova.virt.libvirt.driver [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:42:34 compute-0 nova_compute[189387]: 2025-11-26 23:42:34.333 189391 DEBUG nova.virt.libvirt.driver [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:42:34 compute-0 nova_compute[189387]: 2025-11-26 23:42:34.333 189391 DEBUG nova.virt.libvirt.driver [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:42:34 compute-0 nova_compute[189387]: 2025-11-26 23:42:34.334 189391 DEBUG nova.virt.libvirt.driver [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:42:34 compute-0 nova_compute[189387]: 2025-11-26 23:42:34.334 189391 DEBUG nova.virt.libvirt.driver [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:42:34 compute-0 nova_compute[189387]: 2025-11-26 23:42:34.344 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:42:34 compute-0 nova_compute[189387]: 2025-11-26 23:42:34.350 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 23:42:34 compute-0 nova_compute[189387]: 2025-11-26 23:42:34.461 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 23:42:34 compute-0 nova_compute[189387]: 2025-11-26 23:42:34.462 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200554.259991, 2b8e8c61-3efb-436e-87b5-35ac9fe60d69 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:42:34 compute-0 nova_compute[189387]: 2025-11-26 23:42:34.462 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] VM Paused (Lifecycle Event)#033[00m
Nov 26 23:42:34 compute-0 podman[251557]: 2025-11-26 23:42:34.394408006 +0000 UTC m=+0.034052807 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 26 23:42:34 compute-0 nova_compute[189387]: 2025-11-26 23:42:34.545 189391 INFO nova.compute.manager [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Took 6.88 seconds to spawn the instance on the hypervisor.#033[00m
Nov 26 23:42:34 compute-0 nova_compute[189387]: 2025-11-26 23:42:34.546 189391 DEBUG nova.compute.manager [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:42:34 compute-0 nova_compute[189387]: 2025-11-26 23:42:34.555 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:42:34 compute-0 nova_compute[189387]: 2025-11-26 23:42:34.573 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200554.268312, 2b8e8c61-3efb-436e-87b5-35ac9fe60d69 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:42:34 compute-0 nova_compute[189387]: 2025-11-26 23:42:34.574 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] VM Resumed (Lifecycle Event)#033[00m
Nov 26 23:42:34 compute-0 podman[251557]: 2025-11-26 23:42:34.679498596 +0000 UTC m=+0.319143417 container create 11ed40a50bb0304de5c7d76f5d6732f29fb48c69f4635109ad27cf17c24536f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:42:34 compute-0 nova_compute[189387]: 2025-11-26 23:42:34.756 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:42:34 compute-0 nova_compute[189387]: 2025-11-26 23:42:34.763 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
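
Both syncs above ("Started" and "Resumed") compare three inputs: DB power_state 0 (NOSTATE), hypervisor power_state 1 (RUNNING), and the pending task_state spawning, and both end in the Skip branch because a task is in flight. The decision reduced to a sketch (power-state codes follow nova's conventions; the rest is illustrative):

    NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0, 1, 3, 4  # nova power-state codes

    def sync_power_state(db_state, vm_state, task_state):
        if task_state is not None:
            # "During sync_power_state the instance has a pending task. Skip."
            return 'skip: pending task (%s)' % task_state
        if db_state != vm_state:
            return 'update DB power_state %s -> %s' % (db_state, vm_state)
        return 'in sync'

    print(sync_power_state(NOSTATE, RUNNING, 'spawning'))
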
Nov 26 23:42:34 compute-0 systemd[1]: Started libpod-conmon-11ed40a50bb0304de5c7d76f5d6732f29fb48c69f4635109ad27cf17c24536f7.scope.
Nov 26 23:42:34 compute-0 nova_compute[189387]: 2025-11-26 23:42:34.804 189391 INFO nova.compute.manager [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Took 8.33 seconds to build instance.#033[00m
Nov 26 23:42:34 compute-0 systemd[1]: Started libcrun container.
Nov 26 23:42:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61d44b59abc547f2c8918ffa04f8d496aa4c22c9ff91dc1a62123982e319499b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 26 23:42:34 compute-0 nova_compute[189387]: 2025-11-26 23:42:34.852 189391 DEBUG oslo_concurrency.lockutils [None req-d69b30b9-c899-489e-b079-655cbed13ced 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.485s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:42:34 compute-0 podman[251557]: 2025-11-26 23:42:34.9346955 +0000 UTC m=+0.574340331 container init 11ed40a50bb0304de5c7d76f5d6732f29fb48c69f4635109ad27cf17c24536f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 26 23:42:34 compute-0 podman[251557]: 2025-11-26 23:42:34.949865434 +0000 UTC m=+0.589510225 container start 11ed40a50bb0304de5c7d76f5d6732f29fb48c69f4635109ad27cf17c24536f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
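
In parallel, podman materializes the proxy container: the image pull, container create, init, and start events above span roughly 0.6 s (the m=+... monotonic offsets). A CLI-level sketch of the same sequence (only the name and image come from the log; the real container also gets host networking and bind mounts that these events do not show):

    import subprocess

    IMAGE = ('quay.io/podified-antelope-centos9/'
             'openstack-neutron-metadata-agent-ovn:current-podified')

    def run_proxy_container(network_id):
        name = 'neutron-haproxy-ovnmeta-%s' % network_id
        subprocess.run(['podman', 'pull', IMAGE], check=True)  # "image pull"
        subprocess.run(['podman', 'run', '--detach',           # create/init/start
                        '--name', name, IMAGE], check=True)
        return name
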
Nov 26 23:42:34 compute-0 ovn_controller[97697]: 2025-11-26T23:42:34Z|00160|binding|INFO|Releasing lport 7b0be577-69f9-4df8-992b-e7c104217e56 from this chassis (sb_readonly=0)
Nov 26 23:42:34 compute-0 ovn_controller[97697]: 2025-11-26T23:42:34Z|00161|binding|INFO|Releasing lport 9bcac48d-895a-4cd4-ba63-78258e9255b2 from this chassis (sb_readonly=0)
Nov 26 23:42:34 compute-0 ovn_controller[97697]: 2025-11-26T23:42:34Z|00162|binding|INFO|Releasing lport 5a5b3695-2a05-4fd3-bc2b-35e2893ba4c1 from this chassis (sb_readonly=0)
Nov 26 23:42:34 compute-0 neutron-haproxy-ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2[251571]: [NOTICE]   (251575) : New worker (251577) forked
Nov 26 23:42:34 compute-0 neutron-haproxy-ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2[251571]: [NOTICE]   (251575) : Loading success.
Nov 26 23:42:35 compute-0 nova_compute[189387]: 2025-11-26 23:42:35.064 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:35 compute-0 nova_compute[189387]: 2025-11-26 23:42:35.623 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:36 compute-0 nova_compute[189387]: 2025-11-26 23:42:36.434 189391 DEBUG nova.compute.manager [req-5d096c54-8c61-4910-9f32-a890a82507cd req-f78df07c-e9fd-464c-9594-dc5b8ed1d06a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Received event network-vif-plugged-798557c8-33b8-48fa-ba80-092115a6af38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:42:36 compute-0 nova_compute[189387]: 2025-11-26 23:42:36.435 189391 DEBUG oslo_concurrency.lockutils [req-5d096c54-8c61-4910-9f32-a890a82507cd req-f78df07c-e9fd-464c-9594-dc5b8ed1d06a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:42:36 compute-0 nova_compute[189387]: 2025-11-26 23:42:36.436 189391 DEBUG oslo_concurrency.lockutils [req-5d096c54-8c61-4910-9f32-a890a82507cd req-f78df07c-e9fd-464c-9594-dc5b8ed1d06a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:42:36 compute-0 nova_compute[189387]: 2025-11-26 23:42:36.437 189391 DEBUG oslo_concurrency.lockutils [req-5d096c54-8c61-4910-9f32-a890a82507cd req-f78df07c-e9fd-464c-9594-dc5b8ed1d06a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:42:36 compute-0 nova_compute[189387]: 2025-11-26 23:42:36.438 189391 DEBUG nova.compute.manager [req-5d096c54-8c61-4910-9f32-a890a82507cd req-f78df07c-e9fd-464c-9594-dc5b8ed1d06a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] No waiting events found dispatching network-vif-plugged-798557c8-33b8-48fa-ba80-092115a6af38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:42:36 compute-0 nova_compute[189387]: 2025-11-26 23:42:36.439 189391 WARNING nova.compute.manager [req-5d096c54-8c61-4910-9f32-a890a82507cd req-f78df07c-e9fd-464c-9594-dc5b8ed1d06a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Received unexpected event network-vif-plugged-798557c8-33b8-48fa-ba80-092115a6af38 for instance with vm_state active and task_state None.#033[00m
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.848 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.849 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.849 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
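
This registration burst is a single polling cycle fanning two dozen pollsters onto a one-thread executor, which is exactly why the 23:42:36.848 message warns the cycle will run long. The dispatch shape (a sketch; poll_one is a hypothetical stand-in for a pollster's sample collection):

    from concurrent.futures import ThreadPoolExecutor

    def poll_one(pollster):
        # Hypothetical stand-in: a real pollster gathers samples here.
        return pollster()

    def run_cycle(pollsters, workers=1):
        if len(pollsters) > workers:
            print('more pollsters (%d) than worker threads (%d): '
                  'expect a longer cycle' % (len(pollsters), workers))
        with ThreadPoolExecutor(max_workers=workers) as pool:
            futures = [pool.submit(poll_one, p) for p in pollsters]
            return [f.result() for f in futures]
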
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.858 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 2b8e8c61-3efb-436e-87b5-35ac9fe60d69 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 26 23:42:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:36.859 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/2b8e8c61-3efb-436e-87b5-35ac9fe60d69 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}caea05af4ff3bb71dca694a18a22cbf449a7452987534b1df6f159c64c91df36" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
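
The X-Auth-Token in the logged curl line is not the real token: keystoneauth substitutes a SHA256 digest so DEBUG logs can be shared without leaking credentials. The masking amounts to:

    import hashlib

    def mask_token(token):
        # Same rendering as the REQ line above: '{SHA256}' + hex digest.
        return '{SHA256}' + hashlib.sha256(token.encode('utf-8')).hexdigest()
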
Nov 26 23:42:37 compute-0 nova_compute[189387]: 2025-11-26 23:42:37.555 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:42:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:37.848 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1857 Content-Type: application/json Date: Wed, 26 Nov 2025 23:42:36 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-0ae52288-444e-4e58-b5a0-e42d727d7b0f x-openstack-request-id: req-0ae52288-444e-4e58-b5a0-e42d727d7b0f _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 26 23:42:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:37.849 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "2b8e8c61-3efb-436e-87b5-35ac9fe60d69", "name": "tempest-ServerActionsTestJSON-server-317216903", "status": "ACTIVE", "tenant_id": "b5cd62a5ad724aed83d939e3ba6d7fd7", "user_id": "3753fb1a520b4e088ce6979db5ae3773", "metadata": {}, "hostId": "739fe0b1504efff72ee8debbf23634c38f9403facb1d407a4ac9b5d1", "image": {"id": "948c6d5b-0d46-4aec-8649-b6cdcb1a5694", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/948c6d5b-0d46-4aec-8649-b6cdcb1a5694"}]}, "flavor": {"id": "a4234b2d-ed51-4e17-ad57-a8fb6154451b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/a4234b2d-ed51-4e17-ad57-a8fb6154451b"}]}, "created": "2025-11-26T23:42:24Z", "updated": "2025-11-26T23:42:34Z", "addresses": {"tempest-ServerActionsTestJSON-495565316-network": [{"version": 4, "addr": "10.100.0.6", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:56:6c:8b"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/2b8e8c61-3efb-436e-87b5-35ac9fe60d69"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/2b8e8c61-3efb-436e-87b5-35ac9fe60d69"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-1149430954", "OS-SRV-USG:launched_at": "2025-11-26T23:42:34.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1307956321"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000b", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 26 23:42:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:37.849 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/2b8e8c61-3efb-436e-87b5-35ac9fe60d69 used request id req-0ae52288-444e-4e58-b5a0-e42d727d7b0f request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 26 23:42:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:37.851 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2b8e8c61-3efb-436e-87b5-35ac9fe60d69', 'name': 'tempest-ServerActionsTestJSON-server-317216903', 'flavor': {'id': 'a4234b2d-ed51-4e17-ad57-a8fb6154451b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '948c6d5b-0d46-4aec-8649-b6cdcb1a5694'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b5cd62a5ad724aed83d939e3ba6d7fd7', 'user_id': '3753fb1a520b4e088ce6979db5ae3773', 'hostId': '739fe0b1504efff72ee8debbf23634c38f9403facb1d407a4ac9b5d1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
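
Comparing the RESP BODY with the "instance data:" line shows what discovery keeps: status is lowercased, the flavor id is resolved to its full m1.nano definition, os_type and architecture come from the local libvirt domain, and the links and addresses blocks are dropped. That reduction, sketched (field mapping inferred from the log; flavor is assumed to be the already-resolved flavor dict):

    def discovery_record(server, flavor, os_type='hvm', architecture='x86_64'):
        # server: a novaclient server payload like the RESP BODY above.
        return {
            'id': server['id'],
            'name': server['name'],
            'flavor': flavor,                    # e.g. m1.nano: 1 vcpu, 128 MB
            'image': {'id': server['image']['id']},
            'os_type': os_type,                  # from libvirt, not the API
            'architecture': architecture,
            'OS-EXT-SRV-ATTR:instance_name': server['OS-EXT-SRV-ATTR:instance_name'],
            'OS-EXT-SRV-ATTR:host': server['OS-EXT-SRV-ATTR:host'],
            'tenant_id': server['tenant_id'],
            'user_id': server['user_id'],
            'hostId': server['hostId'],
            'status': server['status'].lower(),  # ACTIVE -> active
            'metadata': server['metadata'],
        }
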
Nov 26 23:42:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:37.855 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance cf0578c2-8c80-4b7e-a866-a753553c6f9e from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 26 23:42:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:37.856 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/cf0578c2-8c80-4b7e-a866-a753553c6f9e -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}caea05af4ff3bb71dca694a18a22cbf449a7452987534b1df6f159c64c91df36" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 26 23:42:38 compute-0 nova_compute[189387]: 2025-11-26 23:42:38.556 189391 DEBUG nova.compute.manager [req-22484069-d86a-4252-b48b-98758dc306e7 req-09edc8d0-f1fd-47e5-bd0b-d365e8e7b222 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Received event network-changed-798557c8-33b8-48fa-ba80-092115a6af38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:42:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:38.556 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1975 Content-Type: application/json Date: Wed, 26 Nov 2025 23:42:37 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-2405fceb-0c39-43f5-8907-143adad74c7d x-openstack-request-id: req-2405fceb-0c39-43f5-8907-143adad74c7d _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 26 23:42:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:38.556 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "cf0578c2-8c80-4b7e-a866-a753553c6f9e", "name": "tempest-TestNetworkBasicOps-server-647630909", "status": "ACTIVE", "tenant_id": "41a6ffab20ee4735b3f190a1e087aed2", "user_id": "6a001028c92e48d0b5914bef72937111", "metadata": {}, "hostId": "8203c365f19cd2b80479c3174a08a2afadeb5cfa7f317be6be74fb51", "image": {"id": "948c6d5b-0d46-4aec-8649-b6cdcb1a5694", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/948c6d5b-0d46-4aec-8649-b6cdcb1a5694"}]}, "flavor": {"id": "a4234b2d-ed51-4e17-ad57-a8fb6154451b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/a4234b2d-ed51-4e17-ad57-a8fb6154451b"}]}, "created": "2025-11-26T23:42:00Z", "updated": "2025-11-26T23:42:10Z", "addresses": {"tempest-network-smoke--2066791378": [{"version": 4, "addr": "10.100.0.14", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:81:13:e3"}, {"version": 4, "addr": "192.168.122.238", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:81:13:e3"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/cf0578c2-8c80-4b7e-a866-a753553c6f9e"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/cf0578c2-8c80-4b7e-a866-a753553c6f9e"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestNetworkBasicOps-658321597", "OS-SRV-USG:launched_at": "2025-11-26T23:42:10.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-1043128382"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000009", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 26 23:42:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:38.557 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/cf0578c2-8c80-4b7e-a866-a753553c6f9e used request id req-2405fceb-0c39-43f5-8907-143adad74c7d request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 26 23:42:38 compute-0 nova_compute[189387]: 2025-11-26 23:42:38.558 189391 DEBUG nova.compute.manager [req-22484069-d86a-4252-b48b-98758dc306e7 req-09edc8d0-f1fd-47e5-bd0b-d365e8e7b222 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Refreshing instance network info cache due to event network-changed-798557c8-33b8-48fa-ba80-092115a6af38. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 23:42:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:38.558 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'cf0578c2-8c80-4b7e-a866-a753553c6f9e', 'name': 'tempest-TestNetworkBasicOps-server-647630909', 'flavor': {'id': 'a4234b2d-ed51-4e17-ad57-a8fb6154451b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '948c6d5b-0d46-4aec-8649-b6cdcb1a5694'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '41a6ffab20ee4735b3f190a1e087aed2', 'user_id': '6a001028c92e48d0b5914bef72937111', 'hostId': '8203c365f19cd2b80479c3174a08a2afadeb5cfa7f317be6be74fb51', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 23:42:38 compute-0 nova_compute[189387]: 2025-11-26 23:42:38.558 189391 DEBUG oslo_concurrency.lockutils [req-22484069-d86a-4252-b48b-98758dc306e7 req-09edc8d0-f1fd-47e5-bd0b-d365e8e7b222 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "refresh_cache-2b8e8c61-3efb-436e-87b5-35ac9fe60d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:42:38 compute-0 nova_compute[189387]: 2025-11-26 23:42:38.559 189391 DEBUG oslo_concurrency.lockutils [req-22484069-d86a-4252-b48b-98758dc306e7 req-09edc8d0-f1fd-47e5-bd0b-d365e8e7b222 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquired lock "refresh_cache-2b8e8c61-3efb-436e-87b5-35ac9fe60d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:42:38 compute-0 nova_compute[189387]: 2025-11-26 23:42:38.560 189391 DEBUG nova.network.neutron [req-22484069-d86a-4252-b48b-98758dc306e7 req-09edc8d0-f1fd-47e5-bd0b-d365e8e7b222 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Refreshing network info cache for port 798557c8-33b8-48fa-ba80-092115a6af38 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 23:42:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:38.560 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 26 23:42:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:38.561 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/8c6c2d42-56ca-46f9-a12a-54c84adf5dbd -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}caea05af4ff3bb71dca694a18a22cbf449a7452987534b1df6f159c64c91df36" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.056 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 2083 Content-Type: application/json Date: Wed, 26 Nov 2025 23:42:38 GMT Keep-Alive: timeout=5, max=98 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-fd3df8ba-6a0a-415e-a9d5-c97972a471cf x-openstack-request-id: req-fd3df8ba-6a0a-415e-a9d5-c97972a471cf _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.056 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "8c6c2d42-56ca-46f9-a12a-54c84adf5dbd", "name": "tempest-TestServerBasicOps-server-1593775238", "status": "ACTIVE", "tenant_id": "75af4c8383fc485a90ab9085bbabf0f8", "user_id": "a4055ba44a1948148b34c151da34f6e3", "metadata": {"meta1": "data1", "meta2": "data2", "metaN": "dataN"}, "hostId": "7e9bbf98933435d5380bf9563f4a4367d042a4ff476f97f027dde40b", "image": {"id": "948c6d5b-0d46-4aec-8649-b6cdcb1a5694", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/948c6d5b-0d46-4aec-8649-b6cdcb1a5694"}]}, "flavor": {"id": "a4234b2d-ed51-4e17-ad57-a8fb6154451b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/a4234b2d-ed51-4e17-ad57-a8fb6154451b"}]}, "created": "2025-11-26T23:42:17Z", "updated": "2025-11-26T23:42:25Z", "addresses": {"tempest-TestServerBasicOps-2000708722-network": [{"version": 4, "addr": "10.100.0.5", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:77:71:58"}, {"version": 4, "addr": "192.168.122.221", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:77:71:58"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/8c6c2d42-56ca-46f9-a12a-54c84adf5dbd"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/8c6c2d42-56ca-46f9-a12a-54c84adf5dbd"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestServerBasicOps-14952678", "OS-SRV-USG:launched_at": "2025-11-26T23:42:25.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-secgroup-smoke-2042690007"}, {"name": "tempest-securitygroup--1790448052"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000a", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.056 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/8c6c2d42-56ca-46f9-a12a-54c84adf5dbd used request id req-fd3df8ba-6a0a-415e-a9d5-c97972a471cf request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
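The REQ/RESP pair above is ceilometer's discovery agent fetching server details from the Nova API with microversion 2.1. A minimal sketch of the equivalent call through keystoneauth1 and python-novaclient; the auth URL and credentials are assumptions, not values from this log:

    # Sketch only, assuming hypothetical service credentials.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client as nova_client

    auth = v3.Password(
        auth_url="https://keystone-internal.openstack.svc:5000/v3",  # assumed
        username="ceilometer", password="secret",                    # assumed
        project_name="service",
        user_domain_name="Default", project_domain_name="Default",
    )
    sess = session.Session(auth=auth)
    nova = nova_client.Client("2.1", session=sess)  # X-OpenStack-Nova-API-Version: 2.1
    server = nova.servers.get("8c6c2d42-56ca-46f9-a12a-54c84adf5dbd")
    print(server.name, server.status)  # tempest-TestServerBasicOps-server-1593775238 ACTIVE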
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.057 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '8c6c2d42-56ca-46f9-a12a-54c84adf5dbd', 'name': 'tempest-TestServerBasicOps-server-1593775238', 'flavor': {'id': 'a4234b2d-ed51-4e17-ad57-a8fb6154451b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '948c6d5b-0d46-4aec-8649-b6cdcb1a5694'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000a', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '75af4c8383fc485a90ab9085bbabf0f8', 'user_id': 'a4055ba44a1948148b34c151da34f6e3', 'hostId': '7e9bbf98933435d5380bf9563f4a4367d042a4ff476f97f027dde40b', 'status': 'active', 'metadata': {'meta1': 'data1', 'meta2': 'data2', 'metaN': 'dataN'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.058 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.058 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.059 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.059 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.060 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T23:42:39.059596) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.060 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.061 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.062 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.062 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.063 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.063 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T23:42:39.063103) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.067 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 2b8e8c61-3efb-436e-87b5-35ac9fe60d69 / tap798557c8-33 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.067 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.071 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for cf0578c2-8c80-4b7e-a866-a753553c6f9e / tapd5e5a27b-25 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.071 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.075 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd / tapb298dc50-93 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.075 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.076 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
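The "No delta meter predecessor" lines above indicate the libvirt inspector has no earlier reading for each (instance, vNIC) pair, so this poll only seeds the cache that later polls diff against. A minimal sketch of that predecessor cache; the names are hypothetical, not ceilometer's internals:

    # Sketch only: first observation seeds the cache, later ones yield deltas.
    _prev = {}  # (instance_uuid, vnic_name) -> last raw counter value

    def delta(instance_uuid, vnic_name, raw_value):
        key = (instance_uuid, vnic_name)
        prev = _prev.get(key)
        _prev[key] = raw_value
        if prev is None:
            # No delta meter predecessor: nothing to emit on the first poll.
            return None
        return raw_value - prev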
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.077 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.077 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.078 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.078 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.079 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T23:42:39.078710) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.080 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.080 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.081 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.081 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.082 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.082 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T23:42:39.082057) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.082 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.083 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.084 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.084 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.085 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.086 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.086 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.086 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.087 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T23:42:39.086736) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.112 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/cpu volume: 4250000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 nova_compute[189387]: 2025-11-26 23:42:39.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:42:39 compute-0 nova_compute[189387]: 2025-11-26 23:42:39.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:42:39 compute-0 nova_compute[189387]: 2025-11-26 23:42:39.126 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.151 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/cpu volume: 28450000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.176 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/cpu volume: 13710000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.177 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.177 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.178 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.179 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.179 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.181 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T23:42:39.180524) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.180 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.182 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.183 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.184 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.185 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.186 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.186 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.187 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.187 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.188 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T23:42:39.187937) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.188 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.205 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.206 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.225 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.226 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.243 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.244 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.245 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.246 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.246 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.247 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.247 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.249 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T23:42:39.248031) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.248 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.249 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.250 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.251 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.252 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.253 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.254 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.254 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.254 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.256 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T23:42:39.255410) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.255 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.257 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.257 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.258 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.259 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.260 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.260 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.260 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.261 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.261 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.262 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T23:42:39.261823) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.262 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.263 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 2b8e8c61-3efb-436e-87b5-35ac9fe60d69: ceilometer.compute.pollsters.NoVolumeException
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.263 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.263 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance cf0578c2-8c80-4b7e-a866-a753553c6f9e: ceilometer.compute.pollsters.NoVolumeException
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.264 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.264 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd: ceilometer.compute.pollsters.NoVolumeException
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.265 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
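The memory.usage warnings above typically mean libvirt returned no balloon statistics for these guests, so the pollster has no volume to report. A minimal sketch of inspecting and enabling balloon stats with libvirt-python; the connection URI and the 10-second period are assumptions, not values from this log:

    # Sketch only: check whether a domain exposes balloon memory stats and,
    # if not, ask the memballoon driver to refresh them periodically.
    import libvirt  # libvirt-python

    conn = libvirt.open("qemu:///system")          # assumed URI
    dom = conn.lookupByName("instance-0000000a")   # instance name from the log
    stats = dom.memoryStats()                      # lacks usage keys without balloon stats
    if "available" not in stats:
        # Enable periodic balloon statistics collection (assumed 10s period).
        dom.setMemoryStatsPeriod(10, libvirt.VIR_DOMAIN_AFFECT_LIVE)
    print(stats)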
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.265 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.265 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.266 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.266 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.267 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T23:42:39.266594) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.266 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.267 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.267 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.268 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.269 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.270 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.270 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.270 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.271 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-26T23:42:39.271140) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.271 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.272 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.272 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-317216903>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-647630909>, <NovaLikeServer: tempest-TestServerBasicOps-server-1593775238>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-317216903>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-647630909>, <NovaLikeServer: tempest-TestServerBasicOps-server-1593775238>]
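The ERROR above shows the polling manager's permanent-failure handling: once a pollster raises PollsterPermanentError for a set of resources (here because LibvirtInspector provides no data for OutgoingBytesRatePollster), those resources are excluded from that pollster on that source from then on. A minimal sketch of the pattern, with a hypothetical blacklist standing in for the manager's real bookkeeping:

    # Sketch only: mimic the manager's reaction to PollsterPermanentError.
    from ceilometer.polling import plugin_base

    def poll_once(pollster, resources, blacklist):
        todo = [r for r in resources if r not in blacklist]
        try:
            # real pollsters implement get_samples(manager, cache, resources)
            return list(pollster.get_samples(manager=None, cache={}, resources=todo))
        except plugin_base.PollsterPermanentError:
            blacklist.update(todo)  # prevent polling these resources here anymore
            return []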
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.273 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.273 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.273 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.274 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.274 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.274 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T23:42:39.274433) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.275 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.275 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.276 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.277 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.277 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.278 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.278 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.278 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.279 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.279 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T23:42:39.279170) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.321 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.read.bytes volume: 22775296 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.332 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.394 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.394 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.441 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.442 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.442 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.442 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.442 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.443 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.443 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.443 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.443 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.443 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.443 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.444 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.444 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.444 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.444 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.444 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.444 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.444 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.445 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.445 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.445 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T23:42:39.443254) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.445 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T23:42:39.444711) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.445 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.445 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.446 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.446 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.446 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.446 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.446 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.read.latency volume: 2469118260 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.446 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T23:42:39.446376) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.446 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.read.latency volume: 382733270 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.446 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk.device.read.latency volume: 1844439694 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.447 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk.device.read.latency volume: 1587392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.447 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk.device.read.latency volume: 2669515335 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.447 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk.device.read.latency volume: 1014996 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.447 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
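[editor's note] The disk.device.read.latency cycle above shows the full per-pollster sequence the manager repeats for every meter in this task: run resource discovery (local_instances), check whether the pollster belongs to a source that requires coordination (none do here, so no hash-ring filtering applies), record a heartbeat, then turn each per-device libvirt stat into a sample. A minimal sketch of that control flow, with hypothetical names (this is not ceilometer's actual code):

    # Hypothetical sketch of one polling cycle as traced by the DEBUG lines
    # above; names and signatures are illustrative, not ceilometer's API.
    def run_pollster(pollster, discover, coordinator, heartbeats):
        resources = discover("local_instances")      # "Executing discovery process ..."
        group = pollster.coordination_group          # None for every pollster here
        if coordinator is not None and group is not None:
            # Only coordinated sources split resources over a hash ring.
            resources = coordinator.filter(group, resources)
        heartbeats.update(pollster.name)             # "Pollster heartbeat update: ..."
        # One sample per instance/device pair: "<uuid>/<meter> volume: N"
        return list(pollster.get_samples(resources))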
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.448 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.448 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.448 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.448 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.448 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.448 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.448 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.448 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.449 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.449 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.449 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.449 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.449 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.450 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.450 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.450 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.450 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T23:42:39.448356) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.450 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.450 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.450 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.450 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.451 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.451 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T23:42:39.450432) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.451 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.451 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.451 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
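[editor's note] Note the pollster-to-meter mapping visible in the discovery lines: disk.device.usage comes from PerDevicePhysicalPollster and disk.device.allocation from PerDeviceAllocationPollster, which, if these map to libvirt's block-info fields as the class names suggest, compare the image container's physical size on the host with the host storage allocated to it, so allocation runs slightly ahead of usage for every device logged here. A worked check with the values copied verbatim from the lines above:

    # Byte values copied from the disk.device.usage / disk.device.allocation
    # DEBUG lines above for instance 2b8e8c61-... (two block devices).
    usage      = [196624, 509952]   # PerDevicePhysicalPollster samples
    allocation = [204800, 512000]   # PerDeviceAllocationPollster samples
    for dev, (u, a) in enumerate(zip(usage, allocation)):
        print(f"device {dev}: allocation - usage = {a - u} bytes")
    # device 0: allocation - usage = 8176 bytes
    # device 1: allocation - usage = 2048 bytes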
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.452 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.452 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.452 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.452 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.452 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.452 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.read.requests volume: 729 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.452 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.452 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.453 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T23:42:39.452377) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.453 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.453 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.453 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.453 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.453 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.453 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.454 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.454 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.454 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.454 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.454 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T23:42:39.454206) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.454 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.454 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.454 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.455 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.455 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.455 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.455 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.455 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.455 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.456 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.456 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.456 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T23:42:39.455995) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.456 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.456 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.456 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
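[editor's note] power.state reports volume 1 for all three instances. The sample volume is the raw power-state code; the value 1 denotes a running domain in both libvirt (VIR_DOMAIN_RUNNING) and nova's power_state module. A small decoding table following nova's constants, reproduced from memory, so treat it as an assumption to verify against your nova version:

    # Assumed decoding of the power.state sample volume (nova power_state
    # numbering: 0=NOSTATE, 1=RUNNING, 3=PAUSED, 4=SHUTDOWN, 6=CRASHED,
    # 7=SUSPENDED).
    POWER_STATE = {0: "nostate", 1: "running", 3: "paused",
                   4: "shutdown", 6: "crashed", 7: "suspended"}
    print(POWER_STATE[1])   # all three instances above -> "running"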
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.457 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.457 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.457 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.457 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.457 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.457 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T23:42:39.457265) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.457 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.457 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.458 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.458 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.458 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.458 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.458 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.458 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.458 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.459 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.459 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.459 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.459 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.459 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.459 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.459 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.460 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.460 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.460 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.460 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T23:42:39.459117) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.460 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.460 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.460 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T23:42:39.460417) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.460 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.460 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.461 14 DEBUG ceilometer.compute.pollsters [-] cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.461 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.461 14 DEBUG ceilometer.compute.pollsters [-] 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.461 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.461 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.461 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.462 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.462 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.462 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.462 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.462 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-26T23:42:39.462167) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.462 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-317216903>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-647630909>, <NovaLikeServer: tempest-TestServerBasicOps-server-1593775238>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-317216903>, <NovaLikeServer: tempest-TestNetworkBasicOps-server-647630909>, <NovaLikeServer: tempest-TestServerBasicOps-server-1593775238>]
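[editor's note] This ERROR is the one failure in the task. The cumulative network meters and network.incoming.bytes.delta polled successfully above, but LibvirtInspector produces no *.rate data, so the pollster raises PollsterPermanentError (a real class, see ceilometer.polling.plugin_base) listing the affected servers, and the manager blacklists those resources for this pollster rather than retrying every interval. A sketch of that contract, using a stand-in for the real exception:

    # Stand-in mirroring ceilometer.polling.plugin_base.PollsterPermanentError;
    # the attribute name is an assumption, check the installed ceilometer.
    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            super().__init__(resources)
            self.fail_res_list = resources

    def get_samples(resources, inspector_has_rate_data=False):  # hypothetical
        if not inspector_has_rate_data:
            # Matches "LibvirtInspector does not provide data for
            # IncomingBytesRatePollster" followed by the ERROR line above.
            raise PollsterPermanentError(resources)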
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.462 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.463 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.463 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.463 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.463 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.463 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.463 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.463 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.463 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.463 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.463 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.463 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.463 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.463 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.463 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.463 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.464 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.464 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.464 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.464 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.464 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.464 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.464 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.464 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:42:39 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:42:39.464 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:42:40 compute-0 nova_compute[189387]: 2025-11-26 23:42:40.625 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:42:40 compute-0 nova_compute[189387]: 2025-11-26 23:42:40.691 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-cf0578c2-8c80-4b7e-a866-a753553c6f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:42:40 compute-0 nova_compute[189387]: 2025-11-26 23:42:40.692 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-cf0578c2-8c80-4b7e-a866-a753553c6f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:42:40 compute-0 nova_compute[189387]: 2025-11-26 23:42:40.692 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 23:42:40 compute-0 nova_compute[189387]: 2025-11-26 23:42:40.692 189391 DEBUG nova.objects.instance [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid cf0578c2-8c80-4b7e-a866-a753553c6f9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 23:42:41 compute-0 nova_compute[189387]: 2025-11-26 23:42:41.337 189391 DEBUG nova.network.neutron [req-22484069-d86a-4252-b48b-98758dc306e7 req-09edc8d0-f1fd-47e5-bd0b-d365e8e7b222 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Updated VIF entry in instance network info cache for port 798557c8-33b8-48fa-ba80-092115a6af38. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 26 23:42:41 compute-0 nova_compute[189387]: 2025-11-26 23:42:41.338 189391 DEBUG nova.network.neutron [req-22484069-d86a-4252-b48b-98758dc306e7 req-09edc8d0-f1fd-47e5-bd0b-d365e8e7b222 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Updating instance_info_cache with network_info: [{"id": "798557c8-33b8-48fa-ba80-092115a6af38", "address": "fa:16:3e:56:6c:8b", "network": {"id": "d6f23c8c-9266-4c49-bc94-0b9f021c07c2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-495565316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5cd62a5ad724aed83d939e3ba6d7fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798557c8-33", "ovs_interfaceid": "798557c8-33b8-48fa-ba80-092115a6af38", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 23:42:41 compute-0 nova_compute[189387]: 2025-11-26 23:42:41.356 189391 DEBUG oslo_concurrency.lockutils [req-22484069-d86a-4252-b48b-98758dc306e7 req-09edc8d0-f1fd-47e5-bd0b-d365e8e7b222 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Releasing lock "refresh_cache-2b8e8c61-3efb-436e-87b5-35ac9fe60d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
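[editor's note] The Acquiring/Acquired/Releasing triplet around the cache refresh is oslo.concurrency's named-lock pattern: serializing on "refresh_cache-<instance uuid>" ensures only one worker rewrites a given instance's network info cache at a time. A minimal sketch using the real lockutils API (the function body is illustrative only):

    # Minimal sketch of the per-instance lock seen above; lockutils.lock() is
    # the real oslo.concurrency API, the body is illustrative.
    from oslo_concurrency import lockutils

    def refresh_network_cache(instance_uuid, fetch_nw_info, save_cache):
        with lockutils.lock(f"refresh_cache-{instance_uuid}"):
            # "Forcefully refreshing network info cache for instance": query
            # neutron, then persist via the instance_info_cache update seen above.
            save_cache(instance_uuid, fetch_nw_info(instance_uuid))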
Nov 26 23:42:41 compute-0 nova_compute[189387]: 2025-11-26 23:42:41.634 189391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764200546.6335032, 696e6032-d12c-4533-ae7c-c510dc917f0a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 23:42:41 compute-0 nova_compute[189387]: 2025-11-26 23:42:41.635 189391 INFO nova.compute.manager [-] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] VM Stopped (Lifecycle Event)
Nov 26 23:42:41 compute-0 nova_compute[189387]: 2025-11-26 23:42:41.663 189391 DEBUG nova.compute.manager [None req-33f6b11e-cba5-4430-bc41-f77ebd3e541d - - - - - -] [instance: 696e6032-d12c-4533-ae7c-c510dc917f0a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 23:42:41 compute-0 podman[251588]: 2025-11-26 23:42:41.878731697 +0000 UTC m=+0.164065489 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 26 23:42:42 compute-0 nova_compute[189387]: 2025-11-26 23:42:42.561 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:42:42 compute-0 nova_compute[189387]: 2025-11-26 23:42:42.864 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Updating instance_info_cache with network_info: [{"id": "d5e5a27b-2557-44b9-9b24-392e1a2c33bd", "address": "fa:16:3e:81:13:e3", "network": {"id": "865b8b48-3753-4a05-b614-ccecb1e87781", "bridge": "br-int", "label": "tempest-network-smoke--2066791378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41a6ffab20ee4735b3f190a1e087aed2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e5a27b-25", "ovs_interfaceid": "d5e5a27b-2557-44b9-9b24-392e1a2c33bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 23:42:42 compute-0 nova_compute[189387]: 2025-11-26 23:42:42.886 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-cf0578c2-8c80-4b7e-a866-a753553c6f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 23:42:42 compute-0 nova_compute[189387]: 2025-11-26 23:42:42.887 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 26 23:42:43 compute-0 nova_compute[189387]: 2025-11-26 23:42:43.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:42:43 compute-0 nova_compute[189387]: 2025-11-26 23:42:43.124 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
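[editor's note] _reclaim_queued_deletes is the periodic task that purges soft-deleted instances; with reclaim_instance_interval left at a value of 0 or below, the task short-circuits on every run, which is exactly the "skipping..." line above. A runnable sketch of the guard (CONF here is a stand-in for nova's oslo.config object):

    # Stand-in for nova's CONF; a non-positive reclaim_instance_interval
    # means soft-deleted instances are never reclaimed by this task.
    class CONF:
        reclaim_instance_interval = 0

    def _reclaim_queued_deletes():
        if CONF.reclaim_instance_interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # otherwise: delete soft-deleted instances older than the interval

    _reclaim_queued_deletes()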
Nov 26 23:42:43 compute-0 podman[251614]: 2025-11-26 23:42:43.840022051 +0000 UTC m=+0.127781883 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true)
Nov 26 23:42:43 compute-0 podman[251611]: 2025-11-26 23:42:43.84032538 +0000 UTC m=+0.131095382 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, distribution-scope=public, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.29.0, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, config_id=edpm, vcs-type=git)
Nov 26 23:42:43 compute-0 podman[251616]: 2025-11-26 23:42:43.849043701 +0000 UTC m=+0.127128545 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, io.buildah.version=1.33.7, version=9.6, io.openshift.expose-services=, distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm)
Nov 26 23:42:43 compute-0 podman[251615]: 2025-11-26 23:42:43.856015097 +0000 UTC m=+0.115781954 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 26 23:42:43 compute-0 podman[251613]: 2025-11-26 23:42:43.861984466 +0000 UTC m=+0.141287483 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
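[editor's note] The health_status=healthy events above are podman's periodic healthchecks firing for the EDPM-managed containers. The same state can be re-checked on demand; a minimal sketch, assuming podman is on PATH on the host and using the container name from the first event:

    # Sketch: re-run the healthcheck podman executes periodically.
    # "podman healthcheck run" exits 0 when the check passes.
    import subprocess

    rc = subprocess.run(
        ['podman', 'healthcheck', 'run', 'ovn_metadata_agent']
    ).returncode
    print('healthy' if rc == 0 else 'unhealthy')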
Nov 26 23:42:44 compute-0 nova_compute[189387]: 2025-11-26 23:42:44.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:42:44 compute-0 nova_compute[189387]: 2025-11-26 23:42:44.147 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:42:44 compute-0 nova_compute[189387]: 2025-11-26 23:42:44.148 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:42:44 compute-0 nova_compute[189387]: 2025-11-26 23:42:44.149 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
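[editor's note] The acquire/release pair above is oslo.concurrency's lockutils guarding the resource tracker; the ":: waited / :: held" lines are emitted by its synchronized wrapper. A minimal sketch of the same pattern, assuming only that oslo.concurrency is installed (the decorated function is a hypothetical stand-in for the ResourceTracker method):

    # Sketch of the lockutils pattern behind the DEBUG lines above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # Runs only while the in-process "compute_resources" lock is
        # held; acquire/wait/hold times are logged at DEBUG level.
        pass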
Nov 26 23:42:44 compute-0 nova_compute[189387]: 2025-11-26 23:42:44.149 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:42:44 compute-0 nova_compute[189387]: 2025-11-26 23:42:44.244 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:42:44 compute-0 nova_compute[189387]: 2025-11-26 23:42:44.326 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:42:44 compute-0 nova_compute[189387]: 2025-11-26 23:42:44.340 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:42:44 compute-0 nova_compute[189387]: 2025-11-26 23:42:44.419 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:42:44 compute-0 nova_compute[189387]: 2025-11-26 23:42:44.426 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:42:44 compute-0 nova_compute[189387]: 2025-11-26 23:42:44.492 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:42:44 compute-0 nova_compute[189387]: 2025-11-26 23:42:44.493 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:42:44 compute-0 nova_compute[189387]: 2025-11-26 23:42:44.563 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:42:44 compute-0 nova_compute[189387]: 2025-11-26 23:42:44.573 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:42:44 compute-0 nova_compute[189387]: 2025-11-26 23:42:44.635 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:42:44 compute-0 nova_compute[189387]: 2025-11-26 23:42:44.636 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:42:44 compute-0 nova_compute[189387]: 2025-11-26 23:42:44.698 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8c6c2d42-56ca-46f9-a12a-54c84adf5dbd/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
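[editor's note] Each disk probe above is qemu-img info run under oslo.concurrency's prlimit wrapper, which caps the child's address space at 1 GiB and its CPU time at 30 s (the --as/--cpu values in the logged command). A minimal sketch of that invocation, assuming oslo.concurrency is installed, with the disk path copied from the log:

    # Sketch: probe an instance disk with the same resource caps
    # shown in the logged command (--as=1073741824 --cpu=30).
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824,  # bytes
                                        cpu_time=30)               # seconds
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk',
        '--force-share', '--output=json',
        prlimit=limits)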
Nov 26 23:42:45 compute-0 nova_compute[189387]: 2025-11-26 23:42:45.089 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:42:45 compute-0 nova_compute[189387]: 2025-11-26 23:42:45.090 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4847MB free_disk=72.33927154541016GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 23:42:45 compute-0 nova_compute[189387]: 2025-11-26 23:42:45.091 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:42:45 compute-0 nova_compute[189387]: 2025-11-26 23:42:45.091 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:42:45 compute-0 nova_compute[189387]: 2025-11-26 23:42:45.219 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance cf0578c2-8c80-4b7e-a866-a753553c6f9e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:42:45 compute-0 nova_compute[189387]: 2025-11-26 23:42:45.221 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:42:45 compute-0 nova_compute[189387]: 2025-11-26 23:42:45.221 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 2b8e8c61-3efb-436e-87b5-35ac9fe60d69 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:42:45 compute-0 nova_compute[189387]: 2025-11-26 23:42:45.222 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:42:45 compute-0 nova_compute[189387]: 2025-11-26 23:42:45.222 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 23:42:45 compute-0 nova_compute[189387]: 2025-11-26 23:42:45.331 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:42:45 compute-0 nova_compute[189387]: 2025-11-26 23:42:45.354 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
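[editor's note] Placement treats each resource class's schedulable capacity as (total - reserved) x allocation_ratio, so the inventory reported above works out as follows (a quick arithmetic check, not output from the log):

    # Worked check of the inventory values in the log line above:
    # capacity = (total - reserved) * allocation_ratio
    vcpu = (8 - 0) * 4.0        # 32.0 schedulable VCPUs
    ram = (7680 - 512) * 1.0    # 7168.0 MB schedulable
    disk = (79 - 1) * 0.9       # 70.2 GB schedulable
    print(vcpu, ram, disk)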
Nov 26 23:42:45 compute-0 nova_compute[189387]: 2025-11-26 23:42:45.390 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:42:45 compute-0 nova_compute[189387]: 2025-11-26 23:42:45.391 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:42:45 compute-0 nova_compute[189387]: 2025-11-26 23:42:45.607 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:42:45 compute-0 nova_compute[189387]: 2025-11-26 23:42:45.628 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:42:46 compute-0 ovn_controller[97697]: 2025-11-26T23:42:46Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:81:13:e3 10.100.0.14
Nov 26 23:42:46 compute-0 ovn_controller[97697]: 2025-11-26T23:42:46Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:81:13:e3 10.100.0.14
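[editor's note] The DHCPOFFER/DHCPACK pair above is OVN's native DHCP responder (the pinctrl thread inside ovn-controller) answering for 10.100.0.14; no dnsmasq process is involved. The options being served live in the northbound DHCP_Options table; a sketch of inspecting them, assuming ovn-nbctl on this host can reach the northbound database:

    # Sketch: dump the DHCP options OVN serves natively.
    import subprocess
    subprocess.run(['ovn-nbctl', 'list', 'DHCP_Options'], check=True)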
Nov 26 23:42:47 compute-0 nova_compute[189387]: 2025-11-26 23:42:47.388 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:42:47 compute-0 nova_compute[189387]: 2025-11-26 23:42:47.390 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:42:47 compute-0 nova_compute[189387]: 2025-11-26 23:42:47.564 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:42:49 compute-0 nova_compute[189387]: 2025-11-26 23:42:49.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:42:49 compute-0 nova_compute[189387]: 2025-11-26 23:42:49.126 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:42:49 compute-0 nova_compute[189387]: 2025-11-26 23:42:49.127 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:42:49 compute-0 nova_compute[189387]: 2025-11-26 23:42:49.229 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:42:50 compute-0 nova_compute[189387]: 2025-11-26 23:42:50.127 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:42:50 compute-0 nova_compute[189387]: 2025-11-26 23:42:50.634 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:42:52 compute-0 nova_compute[189387]: 2025-11-26 23:42:52.566 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:42:52 compute-0 podman[251732]: 2025-11-26 23:42:52.837910717 +0000 UTC m=+0.136478685 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 23:42:53 compute-0 nova_compute[189387]: 2025-11-26 23:42:53.969 189391 INFO nova.compute.manager [None req-6c4db439-bb5f-4af1-b12b-afe5dc057c11 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Get console output
Nov 26 23:42:54 compute-0 nova_compute[189387]: 2025-11-26 23:42:54.061 239672 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Nov 26 23:42:55 compute-0 nova_compute[189387]: 2025-11-26 23:42:55.636 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:42:56 compute-0 nova_compute[189387]: 2025-11-26 23:42:56.563 189391 DEBUG nova.compute.manager [req-f7cf8277-e661-42ad-9c70-90b8260f02ff req-4ce5e7a2-669f-4de6-b83f-cd3a6ffcd907 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Received event network-changed-d5e5a27b-2557-44b9-9b24-392e1a2c33bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 23:42:56 compute-0 nova_compute[189387]: 2025-11-26 23:42:56.564 189391 DEBUG nova.compute.manager [req-f7cf8277-e661-42ad-9c70-90b8260f02ff req-4ce5e7a2-669f-4de6-b83f-cd3a6ffcd907 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Refreshing instance network info cache due to event network-changed-d5e5a27b-2557-44b9-9b24-392e1a2c33bd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 26 23:42:56 compute-0 nova_compute[189387]: 2025-11-26 23:42:56.565 189391 DEBUG oslo_concurrency.lockutils [req-f7cf8277-e661-42ad-9c70-90b8260f02ff req-4ce5e7a2-669f-4de6-b83f-cd3a6ffcd907 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "refresh_cache-cf0578c2-8c80-4b7e-a866-a753553c6f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:42:56 compute-0 nova_compute[189387]: 2025-11-26 23:42:56.565 189391 DEBUG oslo_concurrency.lockutils [req-f7cf8277-e661-42ad-9c70-90b8260f02ff req-4ce5e7a2-669f-4de6-b83f-cd3a6ffcd907 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquired lock "refresh_cache-cf0578c2-8c80-4b7e-a866-a753553c6f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:42:56 compute-0 nova_compute[189387]: 2025-11-26 23:42:56.566 189391 DEBUG nova.network.neutron [req-f7cf8277-e661-42ad-9c70-90b8260f02ff req-4ce5e7a2-669f-4de6-b83f-cd3a6ffcd907 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Refreshing network info cache for port d5e5a27b-2557-44b9-9b24-392e1a2c33bd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 26 23:42:56 compute-0 nova_compute[189387]: 2025-11-26 23:42:56.633 189391 DEBUG oslo_concurrency.lockutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Acquiring lock "e6b6d3cd-7df5-455b-a9eb-8209c97d3d26" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:42:56 compute-0 nova_compute[189387]: 2025-11-26 23:42:56.634 189391 DEBUG oslo_concurrency.lockutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Lock "e6b6d3cd-7df5-455b-a9eb-8209c97d3d26" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:42:56 compute-0 nova_compute[189387]: 2025-11-26 23:42:56.650 189391 DEBUG nova.compute.manager [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 26 23:42:56 compute-0 nova_compute[189387]: 2025-11-26 23:42:56.720 189391 DEBUG oslo_concurrency.lockutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:42:56 compute-0 nova_compute[189387]: 2025-11-26 23:42:56.721 189391 DEBUG oslo_concurrency.lockutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:42:56 compute-0 nova_compute[189387]: 2025-11-26 23:42:56.736 189391 DEBUG nova.virt.hardware [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 26 23:42:56 compute-0 nova_compute[189387]: 2025-11-26 23:42:56.737 189391 INFO nova.compute.claims [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Claim successful on node compute-0.ctlplane.example.com
Nov 26 23:42:56 compute-0 podman[251750]: 2025-11-26 23:42:56.789812426 +0000 UTC m=+0.079511907 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:42:56 compute-0 nova_compute[189387]: 2025-11-26 23:42:56.892 189391 DEBUG nova.compute.provider_tree [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:42:56 compute-0 nova_compute[189387]: 2025-11-26 23:42:56.913 189391 DEBUG nova.scheduler.client.report [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 23:42:56 compute-0 nova_compute[189387]: 2025-11-26 23:42:56.939 189391 DEBUG oslo_concurrency.lockutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:42:56 compute-0 nova_compute[189387]: 2025-11-26 23:42:56.940 189391 DEBUG nova.compute.manager [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 26 23:42:56 compute-0 nova_compute[189387]: 2025-11-26 23:42:56.980 189391 DEBUG nova.compute.manager [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 26 23:42:56 compute-0 nova_compute[189387]: 2025-11-26 23:42:56.981 189391 DEBUG nova.network.neutron [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.004 189391 INFO nova.virt.libvirt.driver [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.024 189391 DEBUG nova.compute.manager [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.114 189391 DEBUG nova.compute.manager [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.117 189391 DEBUG nova.virt.libvirt.driver [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.118 189391 INFO nova.virt.libvirt.driver [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Creating image(s)
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.120 189391 DEBUG oslo_concurrency.lockutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Acquiring lock "/var/lib/nova/instances/e6b6d3cd-7df5-455b-a9eb-8209c97d3d26/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.120 189391 DEBUG oslo_concurrency.lockutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Lock "/var/lib/nova/instances/e6b6d3cd-7df5-455b-a9eb-8209c97d3d26/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.122 189391 DEBUG oslo_concurrency.lockutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Lock "/var/lib/nova/instances/e6b6d3cd-7df5-455b-a9eb-8209c97d3d26/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.143 189391 DEBUG oslo_concurrency.processutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.246 189391 DEBUG oslo_concurrency.processutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.249 189391 DEBUG oslo_concurrency.lockutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Acquiring lock "4bfc824fda96e5558a690ed70963ecd686d78685" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.251 189391 DEBUG oslo_concurrency.lockutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Lock "4bfc824fda96e5558a690ed70963ecd686d78685" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.279 189391 DEBUG oslo_concurrency.processutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.350 189391 DEBUG oslo_concurrency.processutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.352 189391 DEBUG oslo_concurrency.processutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685,backing_fmt=raw /var/lib/nova/instances/e6b6d3cd-7df5-455b-a9eb-8209c97d3d26/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.394 189391 DEBUG oslo_concurrency.processutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685,backing_fmt=raw /var/lib/nova/instances/e6b6d3cd-7df5-455b-a9eb-8209c97d3d26/disk 1073741824" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
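[editor's note] The spawn path above lays the instance disk down as a 1 GiB qcow2 overlay on the cached raw base image, then re-probes it. A standalone sketch of the same two steps, with both paths copied from the log (requires read access to the base image):

    # Sketch: recreate the overlay + probe sequence from the log.
    import json
    import subprocess

    base = '/var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685'
    disk = '/var/lib/nova/instances/e6b6d3cd-7df5-455b-a9eb-8209c97d3d26/disk'

    subprocess.run(['qemu-img', 'create', '-f', 'qcow2',
                    '-o', 'backing_file=%s,backing_fmt=raw' % base,
                    disk, '1073741824'], check=True)
    info = json.loads(subprocess.run(
        ['qemu-img', 'info', disk, '--force-share', '--output=json'],
        check=True, capture_output=True, text=True).stdout)
    print(info['virtual-size'], info['backing-filename'])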
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.396 189391 DEBUG oslo_concurrency.lockutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Lock "4bfc824fda96e5558a690ed70963ecd686d78685" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.145s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.396 189391 DEBUG oslo_concurrency.processutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.454 189391 DEBUG oslo_concurrency.processutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.455 189391 DEBUG nova.virt.disk.api [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Checking if we can resize image /var/lib/nova/instances/e6b6d3cd-7df5-455b-a9eb-8209c97d3d26/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.456 189391 DEBUG oslo_concurrency.processutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e6b6d3cd-7df5-455b-a9eb-8209c97d3d26/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.514 189391 DEBUG oslo_concurrency.processutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e6b6d3cd-7df5-455b-a9eb-8209c97d3d26/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.515 189391 DEBUG nova.virt.disk.api [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Cannot resize image /var/lib/nova/instances/e6b6d3cd-7df5-455b-a9eb-8209c97d3d26/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.516 189391 DEBUG nova.objects.instance [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Lazy-loading 'migration_context' on Instance uuid e6b6d3cd-7df5-455b-a9eb-8209c97d3d26 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.535 189391 DEBUG nova.virt.libvirt.driver [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.536 189391 DEBUG nova.virt.libvirt.driver [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Ensure instance console log exists: /var/lib/nova/instances/e6b6d3cd-7df5-455b-a9eb-8209c97d3d26/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.537 189391 DEBUG oslo_concurrency.lockutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.537 189391 DEBUG oslo_concurrency.lockutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.538 189391 DEBUG oslo_concurrency.lockutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.569 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:42:57 compute-0 nova_compute[189387]: 2025-11-26 23:42:57.899 189391 DEBUG nova.policy [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '79b5e57700ff4dbb9b3442f514676ab4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4ff5d91198464ebab28183b70c2f5398', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
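[editor's note] The failed check above is the expected outcome for this token: the credentials carry only the member and reader roles, and network:attach_external_network is an admin-only rule by default. A toy re-evaluation of that role test (illustration only; nova's real enforcement goes through oslo.policy):

    # Toy illustration of why the check fails: the default rule
    # effectively requires the admin role, which these credentials
    # (from the log line above) do not carry.
    creds = {'roles': ['member', 'reader'], 'is_admin': False}
    allowed = creds['is_admin'] or 'admin' in creds['roles']
    print(allowed)  # False -> "Policy check ... failed"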
Nov 26 23:42:59 compute-0 podman[203621]: time="2025-11-26T23:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:42:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31988 "" "Go-http-client/1.1"
Nov 26 23:42:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5737 "" "Go-http-client/1.1"
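[editor's note] The two GETs above are the libpod REST API being polled over podman's unix socket (the client here is the podman_exporter container configured earlier in the log). The same endpoint can be queried by hand; a sketch assuming the socket path mounted into the exporter and a curl new enough for --unix-socket:

    # Sketch: hit the libpod endpoint from the access log directly.
    # The "d" hostname is a placeholder; the unix socket carries the
    # request, mirroring the GET logged above.
    import subprocess
    subprocess.run(
        ['curl', '--unix-socket', '/run/podman/podman.sock',
         'http://d/v4.9.3/libpod/containers/json?all=true'],
        check=True)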
Nov 26 23:42:59 compute-0 nova_compute[189387]: 2025-11-26 23:42:59.869 189391 DEBUG nova.network.neutron [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Successfully created port: 6a2a6963-cf06-4d69-aefb-ba67636d5477 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 26 23:42:59 compute-0 nova_compute[189387]: 2025-11-26 23:42:59.915 189391 DEBUG nova.network.neutron [req-f7cf8277-e661-42ad-9c70-90b8260f02ff req-4ce5e7a2-669f-4de6-b83f-cd3a6ffcd907 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Updated VIF entry in instance network info cache for port d5e5a27b-2557-44b9-9b24-392e1a2c33bd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 26 23:42:59 compute-0 nova_compute[189387]: 2025-11-26 23:42:59.917 189391 DEBUG nova.network.neutron [req-f7cf8277-e661-42ad-9c70-90b8260f02ff req-4ce5e7a2-669f-4de6-b83f-cd3a6ffcd907 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Updating instance_info_cache with network_info: [{"id": "d5e5a27b-2557-44b9-9b24-392e1a2c33bd", "address": "fa:16:3e:81:13:e3", "network": {"id": "865b8b48-3753-4a05-b614-ccecb1e87781", "bridge": "br-int", "label": "tempest-network-smoke--2066791378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41a6ffab20ee4735b3f190a1e087aed2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e5a27b-25", "ovs_interfaceid": "d5e5a27b-2557-44b9-9b24-392e1a2c33bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
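[editor's note] The network_info blob above is the per-instance cache nova keeps of its Neutron view; fixed IPs sit under network.subnets[].ips[]. A minimal sketch of walking that structure (the snippet hard-codes a fragment of the logged data for brevity):

    # Sketch: pull fixed IPs out of a network_info entry shaped like
    # the cache update above (fragment of the logged JSON).
    network_info = [{
        "address": "fa:16:3e:81:13:e3",
        "network": {"subnets": [
            {"ips": [{"address": "10.100.0.14", "type": "fixed"}]},
        ]},
    }]
    ips = [ip["address"]
           for vif in network_info
           for subnet in vif["network"]["subnets"]
           for ip in subnet["ips"] if ip["type"] == "fixed"]
    print(ips)  # ['10.100.0.14']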
Nov 26 23:42:59 compute-0 nova_compute[189387]: 2025-11-26 23:42:59.957 189391 DEBUG oslo_concurrency.lockutils [req-f7cf8277-e661-42ad-9c70-90b8260f02ff req-4ce5e7a2-669f-4de6-b83f-cd3a6ffcd907 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Releasing lock "refresh_cache-cf0578c2-8c80-4b7e-a866-a753553c6f9e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 23:43:00 compute-0 nova_compute[189387]: 2025-11-26 23:43:00.550 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:43:00 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:00.550 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:74:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:17:d1:48:8c:c3'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 23:43:00 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:00.553 106595 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 26 23:43:00 compute-0 nova_compute[189387]: 2025-11-26 23:43:00.573 189391 DEBUG nova.network.neutron [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Successfully updated port: 6a2a6963-cf06-4d69-aefb-ba67636d5477 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 26 23:43:00 compute-0 nova_compute[189387]: 2025-11-26 23:43:00.599 189391 DEBUG oslo_concurrency.lockutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Acquiring lock "refresh_cache-e6b6d3cd-7df5-455b-a9eb-8209c97d3d26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:43:00 compute-0 nova_compute[189387]: 2025-11-26 23:43:00.599 189391 DEBUG oslo_concurrency.lockutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Acquired lock "refresh_cache-e6b6d3cd-7df5-455b-a9eb-8209c97d3d26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
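The refresh_cache-<uuid> lock lines here (and the release above for cf0578c2) come from oslo.concurrency's named internal locks, which serialize cache refreshes per instance so the builder thread and the external-event handler (req-73d0c73f below) cannot interleave. A minimal sketch of the same pattern, with a hypothetical helper standing in for the Neutron round-trip:

```python
from oslo_concurrency import lockutils

def refresh_network_cache(instance_uuid):
    # Same named-lock pattern as the Acquiring/Acquired/Releasing
    # "refresh_cache-<uuid>" lines in this log.
    with lockutils.lock('refresh_cache-%s' % instance_uuid):
        rebuild_instance_info_cache(instance_uuid)  # hypothetical helper

def rebuild_instance_info_cache(instance_uuid):
    """Stand-in for the Neutron query + cache write Nova performs here."""
```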
Nov 26 23:43:00 compute-0 nova_compute[189387]: 2025-11-26 23:43:00.600 189391 DEBUG nova.network.neutron [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 26 23:43:00 compute-0 nova_compute[189387]: 2025-11-26 23:43:00.638 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:00 compute-0 nova_compute[189387]: 2025-11-26 23:43:00.722 189391 DEBUG nova.compute.manager [req-73d0c73f-6229-4b48-9fd7-16da2baf97d6 req-702dc2fc-c844-4e5a-bb1f-84a67811b991 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Received event network-changed-6a2a6963-cf06-4d69-aefb-ba67636d5477 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:43:00 compute-0 nova_compute[189387]: 2025-11-26 23:43:00.723 189391 DEBUG nova.compute.manager [req-73d0c73f-6229-4b48-9fd7-16da2baf97d6 req-702dc2fc-c844-4e5a-bb1f-84a67811b991 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Refreshing instance network info cache due to event network-changed-6a2a6963-cf06-4d69-aefb-ba67636d5477. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 23:43:00 compute-0 nova_compute[189387]: 2025-11-26 23:43:00.724 189391 DEBUG oslo_concurrency.lockutils [req-73d0c73f-6229-4b48-9fd7-16da2baf97d6 req-702dc2fc-c844-4e5a-bb1f-84a67811b991 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "refresh_cache-e6b6d3cd-7df5-455b-a9eb-8209c97d3d26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:43:00 compute-0 nova_compute[189387]: 2025-11-26 23:43:00.809 189391 DEBUG nova.network.neutron [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 26 23:43:01 compute-0 openstack_network_exporter[205787]: ERROR   23:43:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:43:01 compute-0 openstack_network_exporter[205787]: ERROR   23:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:43:01 compute-0 openstack_network_exporter[205787]: ERROR   23:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:43:01 compute-0 openstack_network_exporter[205787]: ERROR   23:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:43:01 compute-0 openstack_network_exporter[205787]: ERROR   23:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
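These exporter errors mean it found no *.ctl unix control sockets for ovs-vswitchd/ovn-northd; ovs-appctl reaches each daemon through a per-PID socket, conventionally under /var/run/openvswitch. A hedged sketch of that lookup-then-call flow (paths are the conventional defaults, not read from this host):

```python
import glob
import subprocess

RUNDIR = "/var/run/openvswitch"  # assumed default rundir

def appctl(daemon, *cmd):
    # ovs-appctl targets a per-PID control socket such as
    # /var/run/openvswitch/ovs-vswitchd.<pid>.ctl; with no such file we
    # fail the same way the exporter logs above.
    socks = glob.glob(f"{RUNDIR}/{daemon}.*.ctl")
    if not socks:
        raise RuntimeError(f"no control socket files found for {daemon}")
    out = subprocess.run(["ovs-appctl", "-t", socks[0], *cmd],
                         capture_output=True, text=True, check=True)
    return out.stdout

# e.g. appctl("ovs-vswitchd", "dpif-netdev/pmd-rxq-show")
```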
Nov 26 23:43:01 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:01.556 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbd59242-3683-4df7-8a2a-12b2eb702783, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.651 189391 DEBUG nova.network.neutron [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Updating instance_info_cache with network_info: [{"id": "6a2a6963-cf06-4d69-aefb-ba67636d5477", "address": "fa:16:3e:53:2b:62", "network": {"id": "f1dc197e-6e53-4ae0-97d3-51d8d3448633", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1730731300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ff5d91198464ebab28183b70c2f5398", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a2a6963-cf", "ovs_interfaceid": "6a2a6963-cf06-4d69-aefb-ba67636d5477", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.667 189391 DEBUG oslo_concurrency.lockutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Releasing lock "refresh_cache-e6b6d3cd-7df5-455b-a9eb-8209c97d3d26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.667 189391 DEBUG nova.compute.manager [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Instance network_info: |[{"id": "6a2a6963-cf06-4d69-aefb-ba67636d5477", "address": "fa:16:3e:53:2b:62", "network": {"id": "f1dc197e-6e53-4ae0-97d3-51d8d3448633", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1730731300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ff5d91198464ebab28183b70c2f5398", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a2a6963-cf", "ovs_interfaceid": "6a2a6963-cf06-4d69-aefb-ba67636d5477", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.667 189391 DEBUG oslo_concurrency.lockutils [req-73d0c73f-6229-4b48-9fd7-16da2baf97d6 req-702dc2fc-c844-4e5a-bb1f-84a67811b991 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquired lock "refresh_cache-e6b6d3cd-7df5-455b-a9eb-8209c97d3d26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.668 189391 DEBUG nova.network.neutron [req-73d0c73f-6229-4b48-9fd7-16da2baf97d6 req-702dc2fc-c844-4e5a-bb1f-84a67811b991 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Refreshing network info cache for port 6a2a6963-cf06-4d69-aefb-ba67636d5477 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.671 189391 DEBUG nova.virt.libvirt.driver [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Start _get_guest_xml network_info=[{"id": "6a2a6963-cf06-4d69-aefb-ba67636d5477", "address": "fa:16:3e:53:2b:62", "network": {"id": "f1dc197e-6e53-4ae0-97d3-51d8d3448633", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1730731300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ff5d91198464ebab28183b70c2f5398", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a2a6963-cf", "ovs_interfaceid": "6a2a6963-cf06-4d69-aefb-ba67636d5477", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T23:40:04Z,direct_url=<?>,disk_format='qcow2',id=948c6d5b-0d46-4aec-8649-b6cdcb1a5694,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd2e793599b6418881c391df7f71e0c6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T23:40:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': '948c6d5b-0d46-4aec-8649-b6cdcb1a5694'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.678 189391 WARNING nova.virt.libvirt.driver [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.686 189391 DEBUG nova.virt.libvirt.host [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.687 189391 DEBUG nova.virt.libvirt.host [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.691 189391 DEBUG nova.virt.libvirt.host [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.692 189391 DEBUG nova.virt.libvirt.host [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.692 189391 DEBUG nova.virt.libvirt.driver [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.693 189391 DEBUG nova.virt.hardware [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T23:40:03Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a4234b2d-ed51-4e17-ad57-a8fb6154451b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T23:40:04Z,direct_url=<?>,disk_format='qcow2',id=948c6d5b-0d46-4aec-8649-b6cdcb1a5694,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd2e793599b6418881c391df7f71e0c6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T23:40:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.693 189391 DEBUG nova.virt.hardware [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.693 189391 DEBUG nova.virt.hardware [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.694 189391 DEBUG nova.virt.hardware [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.694 189391 DEBUG nova.virt.hardware [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.694 189391 DEBUG nova.virt.hardware [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.695 189391 DEBUG nova.virt.hardware [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.695 189391 DEBUG nova.virt.hardware [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.695 189391 DEBUG nova.virt.hardware [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.696 189391 DEBUG nova.virt.hardware [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.696 189391 DEBUG nova.virt.hardware [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
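The hardware.py lines above choose a guest CPU topology by enumerating every (sockets, cores, threads) factorization of the flavor's vCPU count that fits the limits (65536 apiece here, since neither flavor nor image constrains them); for one vCPU only 1:1:1 qualifies. A simplified sketch of that enumeration, not Nova's exact code:

```python
def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                        max_threads=65536):
    # Enumerate (sockets, cores, threads) with sockets*cores*threads == vcpus,
    # mirroring the "Build topologies ... Got 1 possible topologies" lines.
    found = []
    for s in range(1, min(vcpus, max_sockets) + 1):
        for c in range(1, min(vcpus, max_cores) + 1):
            for t in range(1, min(vcpus, max_threads) + 1):
                if s * c * t == vcpus:
                    found.append((s, c, t))
    return found

print(possible_topologies(1))  # [(1, 1, 1)], as logged above
```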
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.699 189391 DEBUG nova.virt.libvirt.vif [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T23:42:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-704688333',display_name='tempest-ServerAddressesTestJSON-server-704688333',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-704688333',id=12,image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ff5d91198464ebab28183b70c2f5398',ramdisk_id='',reservation_id='r-tnl0z1ix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1516667624',owner_user_name='tempest-ServerAddressesTestJSON-1516667624-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T23:42:57Z,user_data=None,user_id='79b5e57700ff4dbb9b3442f514676ab4',uuid=e6b6d3cd-7df5-455b-a9eb-8209c97d3d26,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6a2a6963-cf06-4d69-aefb-ba67636d5477", "address": "fa:16:3e:53:2b:62", "network": {"id": "f1dc197e-6e53-4ae0-97d3-51d8d3448633", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1730731300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ff5d91198464ebab28183b70c2f5398", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a2a6963-cf", "ovs_interfaceid": "6a2a6963-cf06-4d69-aefb-ba67636d5477", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.699 189391 DEBUG nova.network.os_vif_util [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Converting VIF {"id": "6a2a6963-cf06-4d69-aefb-ba67636d5477", "address": "fa:16:3e:53:2b:62", "network": {"id": "f1dc197e-6e53-4ae0-97d3-51d8d3448633", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1730731300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ff5d91198464ebab28183b70c2f5398", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a2a6963-cf", "ovs_interfaceid": "6a2a6963-cf06-4d69-aefb-ba67636d5477", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.700 189391 DEBUG nova.network.os_vif_util [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:2b:62,bridge_name='br-int',has_traffic_filtering=True,id=6a2a6963-cf06-4d69-aefb-ba67636d5477,network=Network(f1dc197e-6e53-4ae0-97d3-51d8d3448633),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a2a6963-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.701 189391 DEBUG nova.objects.instance [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Lazy-loading 'pci_devices' on Instance uuid e6b6d3cd-7df5-455b-a9eb-8209c97d3d26 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.716 189391 DEBUG nova.virt.libvirt.driver [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] End _get_guest_xml xml=<domain type="kvm">
Nov 26 23:43:01 compute-0 nova_compute[189387]:  <uuid>e6b6d3cd-7df5-455b-a9eb-8209c97d3d26</uuid>
Nov 26 23:43:01 compute-0 nova_compute[189387]:  <name>instance-0000000c</name>
Nov 26 23:43:01 compute-0 nova_compute[189387]:  <memory>131072</memory>
Nov 26 23:43:01 compute-0 nova_compute[189387]:  <vcpu>1</vcpu>
Nov 26 23:43:01 compute-0 nova_compute[189387]:  <metadata>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 23:43:01 compute-0 nova_compute[189387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:      <nova:name>tempest-ServerAddressesTestJSON-server-704688333</nova:name>
Nov 26 23:43:01 compute-0 nova_compute[189387]:      <nova:creationTime>2025-11-26 23:43:01</nova:creationTime>
Nov 26 23:43:01 compute-0 nova_compute[189387]:      <nova:flavor name="m1.nano">
Nov 26 23:43:01 compute-0 nova_compute[189387]:        <nova:memory>128</nova:memory>
Nov 26 23:43:01 compute-0 nova_compute[189387]:        <nova:disk>1</nova:disk>
Nov 26 23:43:01 compute-0 nova_compute[189387]:        <nova:swap>0</nova:swap>
Nov 26 23:43:01 compute-0 nova_compute[189387]:        <nova:ephemeral>0</nova:ephemeral>
Nov 26 23:43:01 compute-0 nova_compute[189387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 23:43:01 compute-0 nova_compute[189387]:      </nova:flavor>
Nov 26 23:43:01 compute-0 nova_compute[189387]:      <nova:owner>
Nov 26 23:43:01 compute-0 nova_compute[189387]:        <nova:user uuid="79b5e57700ff4dbb9b3442f514676ab4">tempest-ServerAddressesTestJSON-1516667624-project-member</nova:user>
Nov 26 23:43:01 compute-0 nova_compute[189387]:        <nova:project uuid="4ff5d91198464ebab28183b70c2f5398">tempest-ServerAddressesTestJSON-1516667624</nova:project>
Nov 26 23:43:01 compute-0 nova_compute[189387]:      </nova:owner>
Nov 26 23:43:01 compute-0 nova_compute[189387]:      <nova:root type="image" uuid="948c6d5b-0d46-4aec-8649-b6cdcb1a5694"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:      <nova:ports>
Nov 26 23:43:01 compute-0 nova_compute[189387]:        <nova:port uuid="6a2a6963-cf06-4d69-aefb-ba67636d5477">
Nov 26 23:43:01 compute-0 nova_compute[189387]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:        </nova:port>
Nov 26 23:43:01 compute-0 nova_compute[189387]:      </nova:ports>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    </nova:instance>
Nov 26 23:43:01 compute-0 nova_compute[189387]:  </metadata>
Nov 26 23:43:01 compute-0 nova_compute[189387]:  <sysinfo type="smbios">
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <system>
Nov 26 23:43:01 compute-0 nova_compute[189387]:      <entry name="manufacturer">RDO</entry>
Nov 26 23:43:01 compute-0 nova_compute[189387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 23:43:01 compute-0 nova_compute[189387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 23:43:01 compute-0 nova_compute[189387]:      <entry name="serial">e6b6d3cd-7df5-455b-a9eb-8209c97d3d26</entry>
Nov 26 23:43:01 compute-0 nova_compute[189387]:      <entry name="uuid">e6b6d3cd-7df5-455b-a9eb-8209c97d3d26</entry>
Nov 26 23:43:01 compute-0 nova_compute[189387]:      <entry name="family">Virtual Machine</entry>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    </system>
Nov 26 23:43:01 compute-0 nova_compute[189387]:  </sysinfo>
Nov 26 23:43:01 compute-0 nova_compute[189387]:  <os>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <boot dev="hd"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <smbios mode="sysinfo"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:  </os>
Nov 26 23:43:01 compute-0 nova_compute[189387]:  <features>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <acpi/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <apic/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <vmcoreinfo/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:  </features>
Nov 26 23:43:01 compute-0 nova_compute[189387]:  <clock offset="utc">
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <timer name="hpet" present="no"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:  </clock>
Nov 26 23:43:01 compute-0 nova_compute[189387]:  <cpu mode="host-model" match="exact">
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:  </cpu>
Nov 26 23:43:01 compute-0 nova_compute[189387]:  <devices>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <disk type="file" device="disk">
Nov 26 23:43:01 compute-0 nova_compute[189387]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/e6b6d3cd-7df5-455b-a9eb-8209c97d3d26/disk"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:      <target dev="vda" bus="virtio"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <disk type="file" device="cdrom">
Nov 26 23:43:01 compute-0 nova_compute[189387]:      <driver name="qemu" type="raw" cache="none"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/e6b6d3cd-7df5-455b-a9eb-8209c97d3d26/disk.config"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:      <target dev="sda" bus="sata"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <interface type="ethernet">
Nov 26 23:43:01 compute-0 nova_compute[189387]:      <mac address="fa:16:3e:53:2b:62"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:      <mtu size="1442"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:      <target dev="tap6a2a6963-cf"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    </interface>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <serial type="pty">
Nov 26 23:43:01 compute-0 nova_compute[189387]:      <log file="/var/lib/nova/instances/e6b6d3cd-7df5-455b-a9eb-8209c97d3d26/console.log" append="off"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    </serial>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <video>
Nov 26 23:43:01 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    </video>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <input type="tablet" bus="usb"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <rng model="virtio">
Nov 26 23:43:01 compute-0 nova_compute[189387]:      <backend model="random">/dev/urandom</backend>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    </rng>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <controller type="usb" index="0"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    <memballoon model="virtio">
Nov 26 23:43:01 compute-0 nova_compute[189387]:      <stats period="10"/>
Nov 26 23:43:01 compute-0 nova_compute[189387]:    </memballoon>
Nov 26 23:43:01 compute-0 nova_compute[189387]:  </devices>
Nov 26 23:43:01 compute-0 nova_compute[189387]: </domain>
Nov 26 23:43:01 compute-0 nova_compute[189387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
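With _get_guest_xml done, the driver hands this document to libvirt, which defines and boots the guest that systemd-machined later registers as qemu-12-instance-0000000c. A minimal libvirt-python sketch of that hand-off (connection URI assumed for a local KVM host; error handling omitted):

```python
import libvirt

def define_and_boot(xml):
    # xml: the <domain> document printed above.
    conn = libvirt.open("qemu:///system")  # assumed local connection URI
    try:
        dom = conn.defineXML(xml)  # persist the domain definition
        dom.create()               # boot it ("Started Virtual Machine" below)
        return dom.name()
    finally:
        conn.close()
```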
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.717 189391 DEBUG nova.compute.manager [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Preparing to wait for external event network-vif-plugged-6a2a6963-cf06-4d69-aefb-ba67636d5477 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.718 189391 DEBUG oslo_concurrency.lockutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Acquiring lock "e6b6d3cd-7df5-455b-a9eb-8209c97d3d26-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.718 189391 DEBUG oslo_concurrency.lockutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Lock "e6b6d3cd-7df5-455b-a9eb-8209c97d3d26-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.718 189391 DEBUG oslo_concurrency.lockutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Lock "e6b6d3cd-7df5-455b-a9eb-8209c97d3d26-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
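"Preparing to wait" for network-vif-plugged means Nova registers the event before plugging the VIF, so Neutron's confirmation cannot race past it; the short e6b6d3cd...-events lock only guards the per-instance event table. A stripped-down sketch of that prepare/fire/wait pattern, using threading.Event in place of Nova's eventlet machinery:

```python
import threading

_events = {}
_events_lock = threading.Lock()

def prepare_for_instance_event(instance_uuid, event_name):
    # Register first, trigger later, so a fast callback is never lost
    # (cf. prepare_for_instance_event in the log above).
    with _events_lock:
        return _events.setdefault((instance_uuid, event_name),
                                  threading.Event())

def external_instance_event(instance_uuid, event_name):
    with _events_lock:
        ev = _events.get((instance_uuid, event_name))
    if ev:
        ev.set()

# usage: ev = prepare_for_instance_event(uuid, 'network-vif-plugged-<port>')
#        ... plug the VIF ...
#        ev.wait(timeout=300)  # Nova's vif_plugging_timeout default is 300s
```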
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.719 189391 DEBUG nova.virt.libvirt.vif [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T23:42:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-704688333',display_name='tempest-ServerAddressesTestJSON-server-704688333',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-704688333',id=12,image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4ff5d91198464ebab28183b70c2f5398',ramdisk_id='',reservation_id='r-tnl0z1ix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1516667624',owner_user_name='tempest-ServerAddressesTestJSON-1516667624-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T23:42:57Z,user_data=None,user_id='79b5e57700ff4dbb9b3442f514676ab4',uuid=e6b6d3cd-7df5-455b-a9eb-8209c97d3d26,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6a2a6963-cf06-4d69-aefb-ba67636d5477", "address": "fa:16:3e:53:2b:62", "network": {"id": "f1dc197e-6e53-4ae0-97d3-51d8d3448633", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1730731300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ff5d91198464ebab28183b70c2f5398", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a2a6963-cf", "ovs_interfaceid": "6a2a6963-cf06-4d69-aefb-ba67636d5477", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.719 189391 DEBUG nova.network.os_vif_util [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Converting VIF {"id": "6a2a6963-cf06-4d69-aefb-ba67636d5477", "address": "fa:16:3e:53:2b:62", "network": {"id": "f1dc197e-6e53-4ae0-97d3-51d8d3448633", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1730731300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ff5d91198464ebab28183b70c2f5398", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a2a6963-cf", "ovs_interfaceid": "6a2a6963-cf06-4d69-aefb-ba67636d5477", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.720 189391 DEBUG nova.network.os_vif_util [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:2b:62,bridge_name='br-int',has_traffic_filtering=True,id=6a2a6963-cf06-4d69-aefb-ba67636d5477,network=Network(f1dc197e-6e53-4ae0-97d3-51d8d3448633),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a2a6963-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.720 189391 DEBUG os_vif [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:2b:62,bridge_name='br-int',has_traffic_filtering=True,id=6a2a6963-cf06-4d69-aefb-ba67636d5477,network=Network(f1dc197e-6e53-4ae0-97d3-51d8d3448633),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a2a6963-cf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.721 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.721 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.722 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.724 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.724 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a2a6963-cf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.725 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6a2a6963-cf, col_values=(('external_ids', {'iface-id': '6a2a6963-cf06-4d69-aefb-ba67636d5477', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:2b:62', 'vm-uuid': 'e6b6d3cd-7df5-455b-a9eb-8209c97d3d26'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
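Those two txn commands are os-vif wiring the tap device into br-int and stamping the Interface row with the iface-id/attached-mac external_ids that OVN later matches when claiming the lport. A sketch of the same transaction through ovsdbapp's Open_vSwitch schema API, following the library's documented setup pattern (endpoint assumed; this mirrors rather than reproduces os-vif's code):

```python
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

# Endpoint assumed: the local ovsdb-server unix socket.
idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

external_ids = {
    'iface-id': '6a2a6963-cf06-4d69-aefb-ba67636d5477',
    'iface-status': 'active',
    'attached-mac': 'fa:16:3e:53:2b:62',
    'vm-uuid': 'e6b6d3cd-7df5-455b-a9eb-8209c97d3d26',
}

# One transaction, two commands, matching the AddPortCommand and
# DbSetCommand entries logged above.
with api.transaction(check_error=True) as txn:
    txn.add(api.add_port('br-int', 'tap6a2a6963-cf', may_exist=True))
    txn.add(api.db_set('Interface', 'tap6a2a6963-cf',
                       ('external_ids', external_ids)))
```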
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.728 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:01 compute-0 NetworkManager[56227]: <info>  [1764200581.7305] manager: (tap6a2a6963-cf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.730 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.743 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.744 189391 INFO os_vif [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:2b:62,bridge_name='br-int',has_traffic_filtering=True,id=6a2a6963-cf06-4d69-aefb-ba67636d5477,network=Network(f1dc197e-6e53-4ae0-97d3-51d8d3448633),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a2a6963-cf')#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.790 189391 DEBUG nova.virt.libvirt.driver [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.790 189391 DEBUG nova.virt.libvirt.driver [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.791 189391 DEBUG nova.virt.libvirt.driver [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] No VIF found with MAC fa:16:3e:53:2b:62, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 26 23:43:01 compute-0 nova_compute[189387]: 2025-11-26 23:43:01.791 189391 INFO nova.virt.libvirt.driver [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Using config drive#033[00m
Nov 26 23:43:01 compute-0 ovn_controller[97697]: 2025-11-26T23:43:01Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:77:71:58 10.100.0.5
Nov 26 23:43:01 compute-0 ovn_controller[97697]: 2025-11-26T23:43:01Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:77:71:58 10.100.0.5
Nov 26 23:43:02 compute-0 nova_compute[189387]: 2025-11-26 23:43:02.842 189391 INFO nova.virt.libvirt.driver [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Creating config drive at /var/lib/nova/instances/e6b6d3cd-7df5-455b-a9eb-8209c97d3d26/disk.config#033[00m
Nov 26 23:43:02 compute-0 nova_compute[189387]: 2025-11-26 23:43:02.847 189391 DEBUG oslo_concurrency.processutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e6b6d3cd-7df5-455b-a9eb-8209c97d3d26/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkgop7gjv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:43:02 compute-0 nova_compute[189387]: 2025-11-26 23:43:02.971 189391 DEBUG oslo_concurrency.processutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e6b6d3cd-7df5-455b-a9eb-8209c97d3d26/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkgop7gjv" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
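The config drive is an ISO 9660 image built from a temporary directory of metadata files; the flags below are copied verbatim from the invocation logged above (-V config-2 is the volume label cloud-init searches for). A sketch of that step via oslo's processutils; assembling the temp directory is out of scope here:

```python
from oslo_concurrency import processutils

def build_config_drive(tmpdir, out_path):
    # Flags copied from the mkisofs command logged above.
    processutils.execute(
        '/usr/bin/mkisofs', '-o', out_path,
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        tmpdir)
```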
Nov 26 23:43:03 compute-0 kernel: tap6a2a6963-cf: entered promiscuous mode
Nov 26 23:43:03 compute-0 NetworkManager[56227]: <info>  [1764200583.0687] manager: (tap6a2a6963-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/60)
Nov 26 23:43:03 compute-0 ovn_controller[97697]: 2025-11-26T23:43:03Z|00163|binding|INFO|Claiming lport 6a2a6963-cf06-4d69-aefb-ba67636d5477 for this chassis.
Nov 26 23:43:03 compute-0 ovn_controller[97697]: 2025-11-26T23:43:03Z|00164|binding|INFO|6a2a6963-cf06-4d69-aefb-ba67636d5477: Claiming fa:16:3e:53:2b:62 10.100.0.9
Nov 26 23:43:03 compute-0 nova_compute[189387]: 2025-11-26 23:43:03.070 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:03 compute-0 ovn_controller[97697]: 2025-11-26T23:43:03Z|00165|binding|INFO|Setting lport 6a2a6963-cf06-4d69-aefb-ba67636d5477 ovn-installed in OVS
Nov 26 23:43:03 compute-0 nova_compute[189387]: 2025-11-26 23:43:03.092 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:03 compute-0 nova_compute[189387]: 2025-11-26 23:43:03.096 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:03 compute-0 systemd-udevd[251818]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:03.112 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:2b:62 10.100.0.9'], port_security=['fa:16:3e:53:2b:62 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e6b6d3cd-7df5-455b-a9eb-8209c97d3d26', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1dc197e-6e53-4ae0-97d3-51d8d3448633', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ff5d91198464ebab28183b70c2f5398', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd2be6a8a-1da2-41fa-a2bd-10e8d1aba472', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6528d72e-fd53-4ef0-bcce-bed7dc8b06e2, chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=6a2a6963-cf06-4d69-aefb-ba67636d5477) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
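The metadata agent sees that Port_Binding update through ovsdbapp's row-event machinery: an event object declares the table and operations it matches, and its run() fires for each matching row, which is what produces the "bound to our chassis" line below. A stripped-down sketch of such an event class (the real agent also filters on the chassis column; the agent hook here is hypothetical):

```python
from ovsdbapp.backend.ovs_idl import event as row_event

class PortBindingUpdatedEvent(row_event.RowEvent):
    """Match updates to Port_Binding, as in the 'Matched UPDATE' line."""

    def __init__(self, agent):
        self.agent = agent
        # events=('update',), table='Port_Binding', conditions=None --
        # the same triple printed in the log above.
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def run(self, event, row, old):
        # The real agent first checks row.chassis against its own chassis,
        # then provisions metadata for the row's datapath.
        self.agent.provision_datapath(row)  # hypothetical hook
```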
Nov 26 23:43:03 compute-0 ovn_controller[97697]: 2025-11-26T23:43:03Z|00166|binding|INFO|Setting lport 6a2a6963-cf06-4d69-aefb-ba67636d5477 up in Southbound
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:03.114 106595 INFO neutron.agent.ovn.metadata.agent [-] Port 6a2a6963-cf06-4d69-aefb-ba67636d5477 in datapath f1dc197e-6e53-4ae0-97d3-51d8d3448633 bound to our chassis#033[00m
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:03.118 106595 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f1dc197e-6e53-4ae0-97d3-51d8d3448633#033[00m
Nov 26 23:43:03 compute-0 NetworkManager[56227]: <info>  [1764200583.1200] device (tap6a2a6963-cf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 23:43:03 compute-0 NetworkManager[56227]: <info>  [1764200583.1279] device (tap6a2a6963-cf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 23:43:03 compute-0 systemd-machined[155674]: New machine qemu-12-instance-0000000c.
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:03.133 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[f4306ce2-949f-40ae-90d4-a686c7ddb6f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:03.134 106595 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf1dc197e-61 in ovnmeta-f1dc197e-6e53-4ae0-97d3-51d8d3448633 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:03.137 239757 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf1dc197e-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:03.137 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[ce4f1a19-e062-4e31-9526-21672f4d979b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:03.139 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[d4bb9025-da99-49fb-bba7-8c5203865c37]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:03 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:03.155 106708 DEBUG oslo.privsep.daemon [-] privsep: reply[5f753b20-8a55-4d06-be56-e08324480000]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:03.190 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[8697db9c-6ab0-4c1b-ba21-0296f3a42384]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:03.226 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[3769bf82-bcb6-4115-81f9-37afef274493]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:03 compute-0 NetworkManager[56227]: <info>  [1764200583.2370] manager: (tapf1dc197e-60): new Veth device (/org/freedesktop/NetworkManager/Devices/61)
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:03.240 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[d2cc05be-2f39-4bd3-b958-99fc1dc8a70e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:03.282 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[99e520db-af39-46b3-af10-2c17e3beb9ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:03.286 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[816ab072-8ea0-4e98-aee4-846ed26eac39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:03 compute-0 NetworkManager[56227]: <info>  [1764200583.3149] device (tapf1dc197e-60): carrier: link connected
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:03.325 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[6dd12fca-ca8f-4129-bf38-7b3daa3e0ae3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:03.345 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[7e2845de-acdd-49b6-8d46-6718012d3e99]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf1dc197e-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:68:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526597, 'reachable_time': 22019, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251853, 'error': None, 'target': 'ovnmeta-f1dc197e-6e53-4ae0-97d3-51d8d3448633', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:03.361 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[5d603507-95e3-4406-834d-915a1b1bf6de]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe38:68f9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 526597, 'tstamp': 526597}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251854, 'error': None, 'target': 'ovnmeta-f1dc197e-6e53-4ae0-97d3-51d8d3448633', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:03.379 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[f958a8a7-178b-4daf-b458-7d16d36b8210]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf1dc197e-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:38:68:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526597, 'reachable_time': 22019, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251855, 'error': None, 'target': 'ovnmeta-f1dc197e-6e53-4ae0-97d3-51d8d3448633', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
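[annotation] The two privsep replies above are pyroute2 netlink messages (RTM_NEWLINK dumps) for the namespace end of the veth; the nested 'attrs' lists carry the IFLA_* attributes. A small sketch of how pyroute2 exposes the same fields when reading such dumps:

    from pyroute2 import IPRoute

    with IPRoute() as ipr:
        for msg in ipr.get_links():
            # Each message is shaped like the RTM_NEWLINK dicts above;
            # get_attr() walks the 'attrs' list by IFLA_* name.
            print(msg.get_attr('IFLA_IFNAME'),
                  msg.get_attr('IFLA_OPERSTATE'),
                  msg.get_attr('IFLA_ADDRESS'))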
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:03.415 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[33f68661-87d6-4d75-958d-f86c2b4ad114]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:03.481 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[4fd14c5f-e031-4507-b3be-10e03d6749f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:03.484 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1dc197e-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:03.485 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:03.486 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf1dc197e-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:43:03 compute-0 NetworkManager[56227]: <info>  [1764200583.4896] manager: (tapf1dc197e-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Nov 26 23:43:03 compute-0 kernel: tapf1dc197e-60: entered promiscuous mode
Nov 26 23:43:03 compute-0 nova_compute[189387]: 2025-11-26 23:43:03.490 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:03.494 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf1dc197e-60, col_values=(('external_ids', {'iface-id': '9317a6c9-d5da-473b-baec-ed2dd824009d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
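[annotation] The three transactions above (DelPortCommand, AddPortCommand, DbSetCommand) re-home the root-namespace veth end onto br-int and tag its Interface row with the Neutron port UUID so ovn-controller can bind it. The same sequence expressed directly against ovsdbapp's Open_vSwitch API; the socket path is an assumption for a stock ovsdb-server, and the agent actually issues three single-command transactions rather than one combined one:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.del_port('tapf1dc197e-60', bridge='br-ex', if_exists=True))
        txn.add(ovs.add_port('br-int', 'tapf1dc197e-60', may_exist=True))
        txn.add(ovs.db_set(
            'Interface', 'tapf1dc197e-60',
            ('external_ids',
             {'iface-id': '9317a6c9-d5da-473b-baec-ed2dd824009d'})))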
Nov 26 23:43:03 compute-0 ovn_controller[97697]: 2025-11-26T23:43:03Z|00167|binding|INFO|Releasing lport 9317a6c9-d5da-473b-baec-ed2dd824009d from this chassis (sb_readonly=0)
Nov 26 23:43:03 compute-0 nova_compute[189387]: 2025-11-26 23:43:03.497 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:03 compute-0 nova_compute[189387]: 2025-11-26 23:43:03.515 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:03.518 106595 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f1dc197e-6e53-4ae0-97d3-51d8d3448633.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f1dc197e-6e53-4ae0-97d3-51d8d3448633.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
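[annotation] The Errno 2 above is the expected path on first provisioning: the agent probes for a previous haproxy pidfile and treats a missing file as "no proxy running for this network yet". A sketch of that tolerant-read pattern; the helper name mirrors get_value_from_file in the log but the body is illustrative:

    import errno

    def get_value_from_file(path):
        # Missing pidfile => no metadata proxy to replace or kill;
        # any other OSError is a real failure and propagates.
        try:
            with open(path) as f:
                return f.read().strip()
        except OSError as e:
            if e.errno == errno.ENOENT:
                return None
            raise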
Nov 26 23:43:03 compute-0 nova_compute[189387]: 2025-11-26 23:43:03.518 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:03.519 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[d88a1aa9-bcac-497d-9470-04f055ff73a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:03.520 106595 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: global
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]:    log         /dev/log local0 debug
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]:    log-tag     haproxy-metadata-proxy-f1dc197e-6e53-4ae0-97d3-51d8d3448633
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]:    user        root
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]:    group       root
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]:    maxconn     1024
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]:    pidfile     /var/lib/neutron/external/pids/f1dc197e-6e53-4ae0-97d3-51d8d3448633.pid.haproxy
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]:    daemon
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: defaults
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]:    log global
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]:    mode http
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]:    option httplog
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]:    option dontlognull
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]:    option http-server-close
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]:    option forwardfor
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]:    retries                 3
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]:    timeout http-request    30s
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]:    timeout connect         30s
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]:    timeout client          32s
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]:    timeout server          32s
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]:    timeout http-keep-alive 30s
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: listen listener
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]:    bind 169.254.169.254:80
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]:    server metadata /var/lib/neutron/metadata_proxy
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]:    http-request add-header X-OVN-Network-ID f1dc197e-6e53-4ae0-97d3-51d8d3448633
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 26 23:43:03 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:03.520 106595 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f1dc197e-6e53-4ae0-97d3-51d8d3448633', 'env', 'PROCESS_TAG=haproxy-f1dc197e-6e53-4ae0-97d3-51d8d3448633', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f1dc197e-6e53-4ae0-97d3-51d8d3448633.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
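[annotation] With the config rendered, the agent spawns haproxy inside the ovnmeta- namespace through neutron-rootwrap, using exactly the argv logged above. Outside the agent, the equivalent spawn reduces to the following sketch (requires root; in the deployment the rootwrap filter list constrains what may be executed):

    import subprocess

    netns = 'ovnmeta-f1dc197e-6e53-4ae0-97d3-51d8d3448633'
    cfg = ('/var/lib/neutron/ovn-metadata-proxy/'
           'f1dc197e-6e53-4ae0-97d3-51d8d3448633.conf')

    # haproxy daemonizes itself ('daemon' in the config above), so
    # run() returns once the master process has forked off.
    subprocess.run(['ip', 'netns', 'exec', netns, 'haproxy', '-f', cfg],
                   check=True)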
Nov 26 23:43:03 compute-0 nova_compute[189387]: 2025-11-26 23:43:03.669 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200583.6673398, e6b6d3cd-7df5-455b-a9eb-8209c97d3d26 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:43:03 compute-0 nova_compute[189387]: 2025-11-26 23:43:03.670 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] VM Started (Lifecycle Event)#033[00m
Nov 26 23:43:03 compute-0 nova_compute[189387]: 2025-11-26 23:43:03.700 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:43:03 compute-0 nova_compute[189387]: 2025-11-26 23:43:03.709 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200583.6674483, e6b6d3cd-7df5-455b-a9eb-8209c97d3d26 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:43:03 compute-0 nova_compute[189387]: 2025-11-26 23:43:03.709 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] VM Paused (Lifecycle Event)#033[00m
Nov 26 23:43:03 compute-0 nova_compute[189387]: 2025-11-26 23:43:03.731 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:43:03 compute-0 nova_compute[189387]: 2025-11-26 23:43:03.736 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 23:43:03 compute-0 nova_compute[189387]: 2025-11-26 23:43:03.755 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
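[annotation] The power-state sync above compares nova's integer power states: the database still holds 0 while libvirt reports 3. Those constants come from nova/compute/power_state.py; a lookup table for reading these entries:

    # Nova power-state constants (nova/compute/power_state.py).
    POWER_STATES = {
        0: 'NOSTATE',    # DB value before the first sync
        1: 'RUNNING',    # reported at 23:43:04 once the guest resumes
        3: 'PAUSED',     # reported here while libvirt finishes spawning
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }
    assert POWER_STATES[3] == 'PAUSED'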
Nov 26 23:43:03 compute-0 podman[251872]: 2025-11-26 23:43:03.837793753 +0000 UTC m=+0.126403836 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Nov 26 23:43:03 compute-0 podman[251911]: 2025-11-26 23:43:03.95866434 +0000 UTC m=+0.067253211 container create b9c7d29d5a36cccb1cd2a44c156aaf53b867075b594c9ee71602992e62689e75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1dc197e-6e53-4ae0-97d3-51d8d3448633, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 23:43:04 compute-0 podman[251911]: 2025-11-26 23:43:03.920576967 +0000 UTC m=+0.029165878 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 26 23:43:04 compute-0 systemd[1]: Started libpod-conmon-b9c7d29d5a36cccb1cd2a44c156aaf53b867075b594c9ee71602992e62689e75.scope.
Nov 26 23:43:04 compute-0 systemd[1]: Started libcrun container.
Nov 26 23:43:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee73026d135375758ff85ce89300feb3eb9d6063ef0fb29b1f75e1b464c354aa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 26 23:43:04 compute-0 podman[251911]: 2025-11-26 23:43:04.083548866 +0000 UTC m=+0.192137757 container init b9c7d29d5a36cccb1cd2a44c156aaf53b867075b594c9ee71602992e62689e75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1dc197e-6e53-4ae0-97d3-51d8d3448633, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:43:04 compute-0 podman[251911]: 2025-11-26 23:43:04.094831816 +0000 UTC m=+0.203420677 container start b9c7d29d5a36cccb1cd2a44c156aaf53b867075b594c9ee71602992e62689e75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1dc197e-6e53-4ae0-97d3-51d8d3448633, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 26 23:43:04 compute-0 nova_compute[189387]: 2025-11-26 23:43:04.110 189391 DEBUG nova.compute.manager [req-7cdd42b4-b517-4ac9-852a-6ac31ff469da req-a8cee408-eac9-4523-8718-f1d08e75afbf f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Received event network-vif-plugged-6a2a6963-cf06-4d69-aefb-ba67636d5477 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:43:04 compute-0 nova_compute[189387]: 2025-11-26 23:43:04.111 189391 DEBUG oslo_concurrency.lockutils [req-7cdd42b4-b517-4ac9-852a-6ac31ff469da req-a8cee408-eac9-4523-8718-f1d08e75afbf f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "e6b6d3cd-7df5-455b-a9eb-8209c97d3d26-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:04 compute-0 nova_compute[189387]: 2025-11-26 23:43:04.111 189391 DEBUG oslo_concurrency.lockutils [req-7cdd42b4-b517-4ac9-852a-6ac31ff469da req-a8cee408-eac9-4523-8718-f1d08e75afbf f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "e6b6d3cd-7df5-455b-a9eb-8209c97d3d26-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:04 compute-0 nova_compute[189387]: 2025-11-26 23:43:04.111 189391 DEBUG oslo_concurrency.lockutils [req-7cdd42b4-b517-4ac9-852a-6ac31ff469da req-a8cee408-eac9-4523-8718-f1d08e75afbf f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "e6b6d3cd-7df5-455b-a9eb-8209c97d3d26-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:04 compute-0 nova_compute[189387]: 2025-11-26 23:43:04.111 189391 DEBUG nova.compute.manager [req-7cdd42b4-b517-4ac9-852a-6ac31ff469da req-a8cee408-eac9-4523-8718-f1d08e75afbf f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Processing event network-vif-plugged-6a2a6963-cf06-4d69-aefb-ba67636d5477 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 26 23:43:04 compute-0 nova_compute[189387]: 2025-11-26 23:43:04.112 189391 DEBUG nova.compute.manager [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
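[annotation] The lock churn above is nova's per-instance event queue: the external network-vif-plugged event from Neutron is popped under an '<instance-uuid>-events' lock so the thread blocked in wait_for_instance_event wakes deterministically. The primitive is oslo.concurrency's lockutils; a sketch of the pattern, with an illustrative function body:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('e6b6d3cd-7df5-455b-a9eb-8209c97d3d26-events')
    def _pop_event():
        # Pop the pending network-vif-plugged event for this instance;
        # the same lock name guards queue writers and readers alike.
        ...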
Nov 26 23:43:04 compute-0 nova_compute[189387]: 2025-11-26 23:43:04.121 189391 DEBUG nova.virt.libvirt.driver [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 26 23:43:04 compute-0 nova_compute[189387]: 2025-11-26 23:43:04.127 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200584.1237473, e6b6d3cd-7df5-455b-a9eb-8209c97d3d26 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:43:04 compute-0 nova_compute[189387]: 2025-11-26 23:43:04.127 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] VM Resumed (Lifecycle Event)#033[00m
Nov 26 23:43:04 compute-0 nova_compute[189387]: 2025-11-26 23:43:04.135 189391 INFO nova.virt.libvirt.driver [-] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Instance spawned successfully.#033[00m
Nov 26 23:43:04 compute-0 nova_compute[189387]: 2025-11-26 23:43:04.136 189391 DEBUG nova.virt.libvirt.driver [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 26 23:43:04 compute-0 neutron-haproxy-ovnmeta-f1dc197e-6e53-4ae0-97d3-51d8d3448633[251924]: [NOTICE]   (251928) : New worker (251930) forked
Nov 26 23:43:04 compute-0 neutron-haproxy-ovnmeta-f1dc197e-6e53-4ae0-97d3-51d8d3448633[251924]: [NOTICE]   (251928) : Loading success.
Nov 26 23:43:04 compute-0 nova_compute[189387]: 2025-11-26 23:43:04.156 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:43:04 compute-0 nova_compute[189387]: 2025-11-26 23:43:04.175 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 23:43:04 compute-0 nova_compute[189387]: 2025-11-26 23:43:04.181 189391 DEBUG nova.virt.libvirt.driver [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:43:04 compute-0 nova_compute[189387]: 2025-11-26 23:43:04.182 189391 DEBUG nova.virt.libvirt.driver [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:43:04 compute-0 nova_compute[189387]: 2025-11-26 23:43:04.182 189391 DEBUG nova.virt.libvirt.driver [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:43:04 compute-0 nova_compute[189387]: 2025-11-26 23:43:04.183 189391 DEBUG nova.virt.libvirt.driver [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:43:04 compute-0 nova_compute[189387]: 2025-11-26 23:43:04.183 189391 DEBUG nova.virt.libvirt.driver [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:43:04 compute-0 nova_compute[189387]: 2025-11-26 23:43:04.184 189391 DEBUG nova.virt.libvirt.driver [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:43:04 compute-0 nova_compute[189387]: 2025-11-26 23:43:04.197 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 23:43:04 compute-0 nova_compute[189387]: 2025-11-26 23:43:04.243 189391 INFO nova.compute.manager [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Took 7.13 seconds to spawn the instance on the hypervisor.#033[00m
Nov 26 23:43:04 compute-0 nova_compute[189387]: 2025-11-26 23:43:04.243 189391 DEBUG nova.compute.manager [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:43:04 compute-0 nova_compute[189387]: 2025-11-26 23:43:04.321 189391 INFO nova.compute.manager [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Took 7.62 seconds to build instance.#033[00m
Nov 26 23:43:04 compute-0 nova_compute[189387]: 2025-11-26 23:43:04.341 189391 DEBUG oslo_concurrency.lockutils [None req-d3e13543-4713-47f4-960c-3472e65b15a2 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Lock "e6b6d3cd-7df5-455b-a9eb-8209c97d3d26" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.707s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:04 compute-0 nova_compute[189387]: 2025-11-26 23:43:04.925 189391 DEBUG nova.network.neutron [req-73d0c73f-6229-4b48-9fd7-16da2baf97d6 req-702dc2fc-c844-4e5a-bb1f-84a67811b991 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Updated VIF entry in instance network info cache for port 6a2a6963-cf06-4d69-aefb-ba67636d5477. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 23:43:04 compute-0 nova_compute[189387]: 2025-11-26 23:43:04.925 189391 DEBUG nova.network.neutron [req-73d0c73f-6229-4b48-9fd7-16da2baf97d6 req-702dc2fc-c844-4e5a-bb1f-84a67811b991 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Updating instance_info_cache with network_info: [{"id": "6a2a6963-cf06-4d69-aefb-ba67636d5477", "address": "fa:16:3e:53:2b:62", "network": {"id": "f1dc197e-6e53-4ae0-97d3-51d8d3448633", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1730731300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ff5d91198464ebab28183b70c2f5398", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a2a6963-cf", "ovs_interfaceid": "6a2a6963-cf06-4d69-aefb-ba67636d5477", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:43:04 compute-0 nova_compute[189387]: 2025-11-26 23:43:04.943 189391 DEBUG oslo_concurrency.lockutils [req-73d0c73f-6229-4b48-9fd7-16da2baf97d6 req-702dc2fc-c844-4e5a-bb1f-84a67811b991 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Releasing lock "refresh_cache-e6b6d3cd-7df5-455b-a9eb-8209c97d3d26" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:43:05 compute-0 nova_compute[189387]: 2025-11-26 23:43:05.643 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:05 compute-0 nova_compute[189387]: 2025-11-26 23:43:05.970 189391 DEBUG oslo_concurrency.lockutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Acquiring lock "280c0e48-ae70-40a7-96ca-137efae9ea75" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:05 compute-0 nova_compute[189387]: 2025-11-26 23:43:05.970 189391 DEBUG oslo_concurrency.lockutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "280c0e48-ae70-40a7-96ca-137efae9ea75" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:05 compute-0 nova_compute[189387]: 2025-11-26 23:43:05.974 189391 DEBUG oslo_concurrency.lockutils [None req-dcdfdf2c-c6d2-486a-803c-8dfd12386634 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Acquiring lock "e6b6d3cd-7df5-455b-a9eb-8209c97d3d26" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:05 compute-0 nova_compute[189387]: 2025-11-26 23:43:05.974 189391 DEBUG oslo_concurrency.lockutils [None req-dcdfdf2c-c6d2-486a-803c-8dfd12386634 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Lock "e6b6d3cd-7df5-455b-a9eb-8209c97d3d26" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:05 compute-0 nova_compute[189387]: 2025-11-26 23:43:05.975 189391 DEBUG oslo_concurrency.lockutils [None req-dcdfdf2c-c6d2-486a-803c-8dfd12386634 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Acquiring lock "e6b6d3cd-7df5-455b-a9eb-8209c97d3d26-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:05 compute-0 nova_compute[189387]: 2025-11-26 23:43:05.975 189391 DEBUG oslo_concurrency.lockutils [None req-dcdfdf2c-c6d2-486a-803c-8dfd12386634 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Lock "e6b6d3cd-7df5-455b-a9eb-8209c97d3d26-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:05 compute-0 nova_compute[189387]: 2025-11-26 23:43:05.975 189391 DEBUG oslo_concurrency.lockutils [None req-dcdfdf2c-c6d2-486a-803c-8dfd12386634 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Lock "e6b6d3cd-7df5-455b-a9eb-8209c97d3d26-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:05 compute-0 nova_compute[189387]: 2025-11-26 23:43:05.977 189391 INFO nova.compute.manager [None req-dcdfdf2c-c6d2-486a-803c-8dfd12386634 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Terminating instance#033[00m
Nov 26 23:43:05 compute-0 nova_compute[189387]: 2025-11-26 23:43:05.978 189391 DEBUG nova.compute.manager [None req-dcdfdf2c-c6d2-486a-803c-8dfd12386634 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 26 23:43:05 compute-0 nova_compute[189387]: 2025-11-26 23:43:05.986 189391 DEBUG nova.compute.manager [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 26 23:43:06 compute-0 kernel: tap6a2a6963-cf (unregistering): left promiscuous mode
Nov 26 23:43:06 compute-0 NetworkManager[56227]: <info>  [1764200586.0065] device (tap6a2a6963-cf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 23:43:06 compute-0 ovn_controller[97697]: 2025-11-26T23:43:06Z|00168|memory|INFO|peak resident set size grew 50% in last 2553.8 seconds, from 16128 kB to 24216 kB
Nov 26 23:43:06 compute-0 ovn_controller[97697]: 2025-11-26T23:43:06Z|00169|memory|INFO|idl-cells-OVN_Southbound:10177 idl-cells-Open_vSwitch:1041 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:352 lflow-cache-entries-cache-matches:281 lflow-cache-size-KB:1443 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:635 ofctrl_installed_flow_usage-KB:463 ofctrl_sb_flow_ref_usage-KB:240
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.014 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:06 compute-0 ovn_controller[97697]: 2025-11-26T23:43:06Z|00170|binding|INFO|Releasing lport 6a2a6963-cf06-4d69-aefb-ba67636d5477 from this chassis (sb_readonly=0)
Nov 26 23:43:06 compute-0 ovn_controller[97697]: 2025-11-26T23:43:06Z|00171|binding|INFO|Setting lport 6a2a6963-cf06-4d69-aefb-ba67636d5477 down in Southbound
Nov 26 23:43:06 compute-0 ovn_controller[97697]: 2025-11-26T23:43:06Z|00172|binding|INFO|Removing iface tap6a2a6963-cf ovn-installed in OVS
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.030 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:06.033 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:2b:62 10.100.0.9'], port_security=['fa:16:3e:53:2b:62 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e6b6d3cd-7df5-455b-a9eb-8209c97d3d26', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1dc197e-6e53-4ae0-97d3-51d8d3448633', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ff5d91198464ebab28183b70c2f5398', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd2be6a8a-1da2-41fa-a2bd-10e8d1aba472', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6528d72e-fd53-4ef0-bcce-bed7dc8b06e2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=6a2a6963-cf06-4d69-aefb-ba67636d5477) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:43:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:06.034 106595 INFO neutron.agent.ovn.metadata.agent [-] Port 6a2a6963-cf06-4d69-aefb-ba67636d5477 in datapath f1dc197e-6e53-4ae0-97d3-51d8d3448633 unbound from our chassis#033[00m
Nov 26 23:43:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:06.036 106595 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f1dc197e-6e53-4ae0-97d3-51d8d3448633, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 26 23:43:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:06.041 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[dd5520f4-21d1-4ff5-81e0-f6e84a84edf0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:06.042 106595 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f1dc197e-6e53-4ae0-97d3-51d8d3448633 namespace which is not needed anymore#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.049 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:06 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 26 23:43:06 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 2.480s CPU time.
Nov 26 23:43:06 compute-0 systemd-machined[155674]: Machine qemu-12-instance-0000000c terminated.
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.091 189391 DEBUG oslo_concurrency.lockutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.092 189391 DEBUG oslo_concurrency.lockutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.103 189391 DEBUG nova.virt.hardware [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.103 189391 INFO nova.compute.claims [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 26 23:43:06 compute-0 kernel: tap6a2a6963-cf: entered promiscuous mode
Nov 26 23:43:06 compute-0 NetworkManager[56227]: <info>  [1764200586.2041] manager: (tap6a2a6963-cf): new Tun device (/org/freedesktop/NetworkManager/Devices/63)
Nov 26 23:43:06 compute-0 kernel: tap6a2a6963-cf (unregistering): left promiscuous mode
Nov 26 23:43:06 compute-0 ovn_controller[97697]: 2025-11-26T23:43:06Z|00173|binding|INFO|Claiming lport 6a2a6963-cf06-4d69-aefb-ba67636d5477 for this chassis.
Nov 26 23:43:06 compute-0 ovn_controller[97697]: 2025-11-26T23:43:06Z|00174|binding|INFO|6a2a6963-cf06-4d69-aefb-ba67636d5477: Claiming fa:16:3e:53:2b:62 10.100.0.9
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.221 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.254 189391 INFO nova.virt.libvirt.driver [-] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Instance destroyed successfully.#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.254 189391 DEBUG nova.objects.instance [None req-dcdfdf2c-c6d2-486a-803c-8dfd12386634 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Lazy-loading 'resources' on Instance uuid e6b6d3cd-7df5-455b-a9eb-8209c97d3d26 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:43:06 compute-0 neutron-haproxy-ovnmeta-f1dc197e-6e53-4ae0-97d3-51d8d3448633[251924]: [NOTICE]   (251928) : haproxy version is 2.8.14-c23fe91
Nov 26 23:43:06 compute-0 neutron-haproxy-ovnmeta-f1dc197e-6e53-4ae0-97d3-51d8d3448633[251924]: [NOTICE]   (251928) : path to executable is /usr/sbin/haproxy
Nov 26 23:43:06 compute-0 neutron-haproxy-ovnmeta-f1dc197e-6e53-4ae0-97d3-51d8d3448633[251924]: [WARNING]  (251928) : Exiting Master process...
Nov 26 23:43:06 compute-0 ovn_controller[97697]: 2025-11-26T23:43:06Z|00175|if_status|INFO|Not setting lport 6a2a6963-cf06-4d69-aefb-ba67636d5477 down as sb is readonly
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.260 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:06 compute-0 neutron-haproxy-ovnmeta-f1dc197e-6e53-4ae0-97d3-51d8d3448633[251924]: [ALERT]    (251928) : Current worker (251930) exited with code 143 (Terminated)
Nov 26 23:43:06 compute-0 neutron-haproxy-ovnmeta-f1dc197e-6e53-4ae0-97d3-51d8d3448633[251924]: [WARNING]  (251928) : All workers exited. Exiting... (0)
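[annotation] Worker exit code 143 in the ALERT above is the conventional 128 + signal-number encoding, i.e. the worker received SIGTERM during the namespace teardown rather than crashing:

    import signal

    # 128 + SIGTERM(15) == 143: a clean termination, matching the
    # "Exiting Master process..." notice that precedes it.
    assert 128 + signal.SIGTERM == 143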
Nov 26 23:43:06 compute-0 systemd[1]: libpod-b9c7d29d5a36cccb1cd2a44c156aaf53b867075b594c9ee71602992e62689e75.scope: Deactivated successfully.
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.265 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:06 compute-0 podman[251959]: 2025-11-26 23:43:06.269772958 +0000 UTC m=+0.088644101 container died b9c7d29d5a36cccb1cd2a44c156aaf53b867075b594c9ee71602992e62689e75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1dc197e-6e53-4ae0-97d3-51d8d3448633, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 23:43:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b9c7d29d5a36cccb1cd2a44c156aaf53b867075b594c9ee71602992e62689e75-userdata-shm.mount: Deactivated successfully.
Nov 26 23:43:06 compute-0 ovn_controller[97697]: 2025-11-26T23:43:06Z|00176|binding|INFO|Releasing lport 6a2a6963-cf06-4d69-aefb-ba67636d5477 from this chassis (sb_readonly=0)
Nov 26 23:43:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:06.335 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:2b:62 10.100.0.9'], port_security=['fa:16:3e:53:2b:62 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e6b6d3cd-7df5-455b-a9eb-8209c97d3d26', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1dc197e-6e53-4ae0-97d3-51d8d3448633', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ff5d91198464ebab28183b70c2f5398', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd2be6a8a-1da2-41fa-a2bd-10e8d1aba472', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6528d72e-fd53-4ef0-bcce-bed7dc8b06e2, chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=6a2a6963-cf06-4d69-aefb-ba67636d5477) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:43:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee73026d135375758ff85ce89300feb3eb9d6063ef0fb29b1f75e1b464c354aa-merged.mount: Deactivated successfully.
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.347 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:06 compute-0 podman[251959]: 2025-11-26 23:43:06.352436419 +0000 UTC m=+0.171307532 container cleanup b9c7d29d5a36cccb1cd2a44c156aaf53b867075b594c9ee71602992e62689e75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1dc197e-6e53-4ae0-97d3-51d8d3448633, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 26 23:43:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:06.361 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:2b:62 10.100.0.9'], port_security=['fa:16:3e:53:2b:62 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e6b6d3cd-7df5-455b-a9eb-8209c97d3d26', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f1dc197e-6e53-4ae0-97d3-51d8d3448633', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4ff5d91198464ebab28183b70c2f5398', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd2be6a8a-1da2-41fa-a2bd-10e8d1aba472', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6528d72e-fd53-4ef0-bcce-bed7dc8b06e2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=6a2a6963-cf06-4d69-aefb-ba67636d5477) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:43:06 compute-0 systemd[1]: libpod-conmon-b9c7d29d5a36cccb1cd2a44c156aaf53b867075b594c9ee71602992e62689e75.scope: Deactivated successfully.
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.375 189391 DEBUG nova.virt.libvirt.vif [None req-dcdfdf2c-c6d2-486a-803c-8dfd12386634 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T23:42:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-704688333',display_name='tempest-ServerAddressesTestJSON-server-704688333',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-704688333',id=12,image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-26T23:43:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4ff5d91198464ebab28183b70c2f5398',ramdisk_id='',reservation_id='r-tnl0z1ix',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-1516667624',owner_user_name='tempest-ServerAddressesTestJSON-1516667624-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T23:43:04Z,user_data=None,user_id='79b5e57700ff4dbb9b3442f514676ab4',uuid=e6b6d3cd-7df5-455b-a9eb-8209c97d3d26,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6a2a6963-cf06-4d69-aefb-ba67636d5477", "address": "fa:16:3e:53:2b:62", "network": {"id": "f1dc197e-6e53-4ae0-97d3-51d8d3448633", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1730731300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ff5d91198464ebab28183b70c2f5398", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a2a6963-cf", "ovs_interfaceid": "6a2a6963-cf06-4d69-aefb-ba67636d5477", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.376 189391 DEBUG nova.network.os_vif_util [None req-dcdfdf2c-c6d2-486a-803c-8dfd12386634 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Converting VIF {"id": "6a2a6963-cf06-4d69-aefb-ba67636d5477", "address": "fa:16:3e:53:2b:62", "network": {"id": "f1dc197e-6e53-4ae0-97d3-51d8d3448633", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1730731300-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4ff5d91198464ebab28183b70c2f5398", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a2a6963-cf", "ovs_interfaceid": "6a2a6963-cf06-4d69-aefb-ba67636d5477", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.378 189391 DEBUG nova.network.os_vif_util [None req-dcdfdf2c-c6d2-486a-803c-8dfd12386634 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:2b:62,bridge_name='br-int',has_traffic_filtering=True,id=6a2a6963-cf06-4d69-aefb-ba67636d5477,network=Network(f1dc197e-6e53-4ae0-97d3-51d8d3448633),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a2a6963-cf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.379 189391 DEBUG os_vif [None req-dcdfdf2c-c6d2-486a-803c-8dfd12386634 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:2b:62,bridge_name='br-int',has_traffic_filtering=True,id=6a2a6963-cf06-4d69-aefb-ba67636d5477,network=Network(f1dc197e-6e53-4ae0-97d3-51d8d3448633),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a2a6963-cf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.385 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.387 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a2a6963-cf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.389 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.391 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.394 189391 INFO os_vif [None req-dcdfdf2c-c6d2-486a-803c-8dfd12386634 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:2b:62,bridge_name='br-int',has_traffic_filtering=True,id=6a2a6963-cf06-4d69-aefb-ba67636d5477,network=Network(f1dc197e-6e53-4ae0-97d3-51d8d3448633),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a2a6963-cf')#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.395 189391 INFO nova.virt.libvirt.driver [None req-dcdfdf2c-c6d2-486a-803c-8dfd12386634 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Deleting instance files /var/lib/nova/instances/e6b6d3cd-7df5-455b-a9eb-8209c97d3d26_del#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.396 189391 INFO nova.virt.libvirt.driver [None req-dcdfdf2c-c6d2-486a-803c-8dfd12386634 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Deletion of /var/lib/nova/instances/e6b6d3cd-7df5-455b-a9eb-8209c97d3d26_del complete#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.430 189391 DEBUG nova.compute.manager [req-de64324f-9066-439c-9db7-9c9ac7154455 req-38574b2f-0767-4643-857c-17a60a02cc90 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Received event network-vif-plugged-6a2a6963-cf06-4d69-aefb-ba67636d5477 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.430 189391 DEBUG oslo_concurrency.lockutils [req-de64324f-9066-439c-9db7-9c9ac7154455 req-38574b2f-0767-4643-857c-17a60a02cc90 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "e6b6d3cd-7df5-455b-a9eb-8209c97d3d26-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.431 189391 DEBUG oslo_concurrency.lockutils [req-de64324f-9066-439c-9db7-9c9ac7154455 req-38574b2f-0767-4643-857c-17a60a02cc90 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "e6b6d3cd-7df5-455b-a9eb-8209c97d3d26-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.431 189391 DEBUG oslo_concurrency.lockutils [req-de64324f-9066-439c-9db7-9c9ac7154455 req-38574b2f-0767-4643-857c-17a60a02cc90 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "e6b6d3cd-7df5-455b-a9eb-8209c97d3d26-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.432 189391 DEBUG nova.compute.manager [req-de64324f-9066-439c-9db7-9c9ac7154455 req-38574b2f-0767-4643-857c-17a60a02cc90 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] No waiting events found dispatching network-vif-plugged-6a2a6963-cf06-4d69-aefb-ba67636d5477 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.432 189391 WARNING nova.compute.manager [req-de64324f-9066-439c-9db7-9c9ac7154455 req-38574b2f-0767-4643-857c-17a60a02cc90 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Received unexpected event network-vif-plugged-6a2a6963-cf06-4d69-aefb-ba67636d5477 for instance with vm_state active and task_state deleting.#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.433 189391 DEBUG nova.compute.manager [req-de64324f-9066-439c-9db7-9c9ac7154455 req-38574b2f-0767-4643-857c-17a60a02cc90 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Received event network-vif-unplugged-6a2a6963-cf06-4d69-aefb-ba67636d5477 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.433 189391 DEBUG oslo_concurrency.lockutils [req-de64324f-9066-439c-9db7-9c9ac7154455 req-38574b2f-0767-4643-857c-17a60a02cc90 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "e6b6d3cd-7df5-455b-a9eb-8209c97d3d26-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.434 189391 DEBUG oslo_concurrency.lockutils [req-de64324f-9066-439c-9db7-9c9ac7154455 req-38574b2f-0767-4643-857c-17a60a02cc90 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "e6b6d3cd-7df5-455b-a9eb-8209c97d3d26-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.434 189391 DEBUG oslo_concurrency.lockutils [req-de64324f-9066-439c-9db7-9c9ac7154455 req-38574b2f-0767-4643-857c-17a60a02cc90 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "e6b6d3cd-7df5-455b-a9eb-8209c97d3d26-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.435 189391 DEBUG nova.compute.manager [req-de64324f-9066-439c-9db7-9c9ac7154455 req-38574b2f-0767-4643-857c-17a60a02cc90 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] No waiting events found dispatching network-vif-unplugged-6a2a6963-cf06-4d69-aefb-ba67636d5477 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.435 189391 DEBUG nova.compute.manager [req-de64324f-9066-439c-9db7-9c9ac7154455 req-38574b2f-0767-4643-857c-17a60a02cc90 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Received event network-vif-unplugged-6a2a6963-cf06-4d69-aefb-ba67636d5477 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 26 23:43:06 compute-0 podman[252005]: 2025-11-26 23:43:06.437486693 +0000 UTC m=+0.057491061 container remove b9c7d29d5a36cccb1cd2a44c156aaf53b867075b594c9ee71602992e62689e75 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f1dc197e-6e53-4ae0-97d3-51d8d3448633, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:43:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:06.452 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[e6562ed9-f856-457d-84ba-3105f46a617d]: (4, ('Wed Nov 26 11:43:06 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f1dc197e-6e53-4ae0-97d3-51d8d3448633 (b9c7d29d5a36cccb1cd2a44c156aaf53b867075b594c9ee71602992e62689e75)\nb9c7d29d5a36cccb1cd2a44c156aaf53b867075b594c9ee71602992e62689e75\nWed Nov 26 11:43:06 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f1dc197e-6e53-4ae0-97d3-51d8d3448633 (b9c7d29d5a36cccb1cd2a44c156aaf53b867075b594c9ee71602992e62689e75)\nb9c7d29d5a36cccb1cd2a44c156aaf53b867075b594c9ee71602992e62689e75\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.453 189391 INFO nova.compute.manager [None req-dcdfdf2c-c6d2-486a-803c-8dfd12386634 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Took 0.47 seconds to destroy the instance on the hypervisor.#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.453 189391 DEBUG oslo.service.loopingcall [None req-dcdfdf2c-c6d2-486a-803c-8dfd12386634 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.454 189391 DEBUG nova.compute.manager [-] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.454 189391 DEBUG nova.network.neutron [-] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 26 23:43:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:06.455 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[58d0559b-135d-4429-a202-144982c5dbfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:06.456 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf1dc197e-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:43:06 compute-0 kernel: tapf1dc197e-60: left promiscuous mode
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.461 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.492 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:06.491 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[3b73545d-a410-4dca-8730-ecce1cbb4bb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.495 189391 DEBUG nova.compute.provider_tree [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.508 189391 DEBUG nova.scheduler.client.report [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 23:43:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:06.505 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[16f0da8e-b007-494f-8e57-9e7b92a2e1cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:06.511 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[eb21b3eb-913f-459f-a9f7-d1ea1303f22b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:06.534 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[db6e278c-3666-4083-87e3-f2d34830428f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526587, 'reachable_time': 43628, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252020, 'error': None, 'target': 'ovnmeta-f1dc197e-6e53-4ae0-97d3-51d8d3448633', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:06 compute-0 systemd[1]: run-netns-ovnmeta\x2df1dc197e\x2d6e53\x2d4ae0\x2d97d3\x2d51d8d3448633.mount: Deactivated successfully.
Nov 26 23:43:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:06.539 106708 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f1dc197e-6e53-4ae0-97d3-51d8d3448633 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 26 23:43:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:06.539 106708 DEBUG oslo.privsep.daemon [-] privsep: reply[0e313a64-095c-471d-9f9b-247246eb5eab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:06.539 106595 INFO neutron.agent.ovn.metadata.agent [-] Port 6a2a6963-cf06-4d69-aefb-ba67636d5477 in datapath f1dc197e-6e53-4ae0-97d3-51d8d3448633 unbound from our chassis#033[00m
Nov 26 23:43:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:06.541 106595 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f1dc197e-6e53-4ae0-97d3-51d8d3448633, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.542 189391 DEBUG oslo_concurrency.lockutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.450s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:06.542 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[411e0982-96ae-47d0-a56d-a36a94e980c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:06.542 106595 INFO neutron.agent.ovn.metadata.agent [-] Port 6a2a6963-cf06-4d69-aefb-ba67636d5477 in datapath f1dc197e-6e53-4ae0-97d3-51d8d3448633 unbound from our chassis#033[00m
Nov 26 23:43:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:06.544 106595 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f1dc197e-6e53-4ae0-97d3-51d8d3448633, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 26 23:43:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:06.544 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[b0033cc9-650a-49d7-ad6b-21de31fea993]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.544 189391 DEBUG nova.compute.manager [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.597 189391 DEBUG nova.compute.manager [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.598 189391 DEBUG nova.network.neutron [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.617 189391 INFO nova.virt.libvirt.driver [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.641 189391 DEBUG nova.compute.manager [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.715 189391 DEBUG nova.compute.manager [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.717 189391 DEBUG nova.virt.libvirt.driver [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.718 189391 INFO nova.virt.libvirt.driver [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Creating image(s)#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.719 189391 DEBUG oslo_concurrency.lockutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Acquiring lock "/var/lib/nova/instances/280c0e48-ae70-40a7-96ca-137efae9ea75/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.720 189391 DEBUG oslo_concurrency.lockutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "/var/lib/nova/instances/280c0e48-ae70-40a7-96ca-137efae9ea75/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.721 189391 DEBUG oslo_concurrency.lockutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "/var/lib/nova/instances/280c0e48-ae70-40a7-96ca-137efae9ea75/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.746 189391 DEBUG oslo_concurrency.processutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.807 189391 DEBUG nova.policy [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6a001028c92e48d0b5914bef72937111', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '41a6ffab20ee4735b3f190a1e087aed2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.812 189391 DEBUG oslo_concurrency.processutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.813 189391 DEBUG oslo_concurrency.lockutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Acquiring lock "4bfc824fda96e5558a690ed70963ecd686d78685" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.814 189391 DEBUG oslo_concurrency.lockutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "4bfc824fda96e5558a690ed70963ecd686d78685" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.835 189391 DEBUG oslo_concurrency.processutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.905 189391 DEBUG oslo_concurrency.processutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.906 189391 DEBUG oslo_concurrency.processutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685,backing_fmt=raw /var/lib/nova/instances/280c0e48-ae70-40a7-96ca-137efae9ea75/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.952 189391 DEBUG oslo_concurrency.processutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685,backing_fmt=raw /var/lib/nova/instances/280c0e48-ae70-40a7-96ca-137efae9ea75/disk 1073741824" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.953 189391 DEBUG oslo_concurrency.lockutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "4bfc824fda96e5558a690ed70963ecd686d78685" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:06 compute-0 nova_compute[189387]: 2025-11-26 23:43:06.954 189391 DEBUG oslo_concurrency.processutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:43:07 compute-0 nova_compute[189387]: 2025-11-26 23:43:07.030 189391 DEBUG oslo_concurrency.processutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:43:07 compute-0 nova_compute[189387]: 2025-11-26 23:43:07.032 189391 DEBUG nova.virt.disk.api [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Checking if we can resize image /var/lib/nova/instances/280c0e48-ae70-40a7-96ca-137efae9ea75/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 26 23:43:07 compute-0 nova_compute[189387]: 2025-11-26 23:43:07.032 189391 DEBUG oslo_concurrency.processutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/280c0e48-ae70-40a7-96ca-137efae9ea75/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:43:07 compute-0 nova_compute[189387]: 2025-11-26 23:43:07.105 189391 DEBUG oslo_concurrency.processutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/280c0e48-ae70-40a7-96ca-137efae9ea75/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:43:07 compute-0 nova_compute[189387]: 2025-11-26 23:43:07.106 189391 DEBUG nova.virt.disk.api [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Cannot resize image /var/lib/nova/instances/280c0e48-ae70-40a7-96ca-137efae9ea75/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Nov 26 23:43:07 compute-0 nova_compute[189387]: 2025-11-26 23:43:07.106 189391 DEBUG nova.objects.instance [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lazy-loading 'migration_context' on Instance uuid 280c0e48-ae70-40a7-96ca-137efae9ea75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:43:07 compute-0 nova_compute[189387]: 2025-11-26 23:43:07.125 189391 DEBUG nova.network.neutron [-] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:43:07 compute-0 nova_compute[189387]: 2025-11-26 23:43:07.136 189391 DEBUG nova.virt.libvirt.driver [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 26 23:43:07 compute-0 nova_compute[189387]: 2025-11-26 23:43:07.137 189391 DEBUG nova.virt.libvirt.driver [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Ensure instance console log exists: /var/lib/nova/instances/280c0e48-ae70-40a7-96ca-137efae9ea75/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 26 23:43:07 compute-0 nova_compute[189387]: 2025-11-26 23:43:07.137 189391 DEBUG oslo_concurrency.lockutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:07 compute-0 nova_compute[189387]: 2025-11-26 23:43:07.138 189391 DEBUG oslo_concurrency.lockutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:07 compute-0 nova_compute[189387]: 2025-11-26 23:43:07.138 189391 DEBUG oslo_concurrency.lockutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:07 compute-0 nova_compute[189387]: 2025-11-26 23:43:07.149 189391 INFO nova.compute.manager [-] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Took 0.70 seconds to deallocate network for instance.#033[00m
Nov 26 23:43:07 compute-0 nova_compute[189387]: 2025-11-26 23:43:07.192 189391 DEBUG oslo_concurrency.lockutils [None req-dcdfdf2c-c6d2-486a-803c-8dfd12386634 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:07 compute-0 nova_compute[189387]: 2025-11-26 23:43:07.193 189391 DEBUG oslo_concurrency.lockutils [None req-dcdfdf2c-c6d2-486a-803c-8dfd12386634 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:07 compute-0 nova_compute[189387]: 2025-11-26 23:43:07.306 189391 DEBUG nova.compute.provider_tree [None req-dcdfdf2c-c6d2-486a-803c-8dfd12386634 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:43:07 compute-0 nova_compute[189387]: 2025-11-26 23:43:07.319 189391 DEBUG nova.scheduler.client.report [None req-dcdfdf2c-c6d2-486a-803c-8dfd12386634 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 23:43:07 compute-0 nova_compute[189387]: 2025-11-26 23:43:07.348 189391 DEBUG oslo_concurrency.lockutils [None req-dcdfdf2c-c6d2-486a-803c-8dfd12386634 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:07 compute-0 nova_compute[189387]: 2025-11-26 23:43:07.374 189391 INFO nova.scheduler.client.report [None req-dcdfdf2c-c6d2-486a-803c-8dfd12386634 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Deleted allocations for instance e6b6d3cd-7df5-455b-a9eb-8209c97d3d26#033[00m
Nov 26 23:43:07 compute-0 nova_compute[189387]: 2025-11-26 23:43:07.454 189391 DEBUG oslo_concurrency.lockutils [None req-dcdfdf2c-c6d2-486a-803c-8dfd12386634 79b5e57700ff4dbb9b3442f514676ab4 4ff5d91198464ebab28183b70c2f5398 - - default default] Lock "e6b6d3cd-7df5-455b-a9eb-8209c97d3d26" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.480s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:07 compute-0 nova_compute[189387]: 2025-11-26 23:43:07.780 189391 DEBUG nova.network.neutron [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Successfully created port: 933bd457-0cc9-4849-a69f-0f02814a844a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 26 23:43:08 compute-0 nova_compute[189387]: 2025-11-26 23:43:08.544 189391 DEBUG nova.compute.manager [req-c5e8cf77-2559-4b9c-84f9-c94309d0e0af req-0fa56722-e29b-4b85-b81f-aa9f215c46ac f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Received event network-vif-plugged-6a2a6963-cf06-4d69-aefb-ba67636d5477 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:43:08 compute-0 nova_compute[189387]: 2025-11-26 23:43:08.545 189391 DEBUG oslo_concurrency.lockutils [req-c5e8cf77-2559-4b9c-84f9-c94309d0e0af req-0fa56722-e29b-4b85-b81f-aa9f215c46ac f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "e6b6d3cd-7df5-455b-a9eb-8209c97d3d26-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:08 compute-0 nova_compute[189387]: 2025-11-26 23:43:08.545 189391 DEBUG oslo_concurrency.lockutils [req-c5e8cf77-2559-4b9c-84f9-c94309d0e0af req-0fa56722-e29b-4b85-b81f-aa9f215c46ac f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "e6b6d3cd-7df5-455b-a9eb-8209c97d3d26-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:08 compute-0 nova_compute[189387]: 2025-11-26 23:43:08.546 189391 DEBUG oslo_concurrency.lockutils [req-c5e8cf77-2559-4b9c-84f9-c94309d0e0af req-0fa56722-e29b-4b85-b81f-aa9f215c46ac f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "e6b6d3cd-7df5-455b-a9eb-8209c97d3d26-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:08 compute-0 nova_compute[189387]: 2025-11-26 23:43:08.546 189391 DEBUG nova.compute.manager [req-c5e8cf77-2559-4b9c-84f9-c94309d0e0af req-0fa56722-e29b-4b85-b81f-aa9f215c46ac f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] No waiting events found dispatching network-vif-plugged-6a2a6963-cf06-4d69-aefb-ba67636d5477 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:43:08 compute-0 nova_compute[189387]: 2025-11-26 23:43:08.547 189391 WARNING nova.compute.manager [req-c5e8cf77-2559-4b9c-84f9-c94309d0e0af req-0fa56722-e29b-4b85-b81f-aa9f215c46ac f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Received unexpected event network-vif-plugged-6a2a6963-cf06-4d69-aefb-ba67636d5477 for instance with vm_state deleted and task_state None.#033[00m
Nov 26 23:43:08 compute-0 nova_compute[189387]: 2025-11-26 23:43:08.547 189391 DEBUG nova.compute.manager [req-c5e8cf77-2559-4b9c-84f9-c94309d0e0af req-0fa56722-e29b-4b85-b81f-aa9f215c46ac f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Received event network-vif-deleted-6a2a6963-cf06-4d69-aefb-ba67636d5477 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:43:08 compute-0 nova_compute[189387]: 2025-11-26 23:43:08.619 189391 DEBUG nova.network.neutron [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Successfully updated port: 933bd457-0cc9-4849-a69f-0f02814a844a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 26 23:43:08 compute-0 nova_compute[189387]: 2025-11-26 23:43:08.643 189391 DEBUG oslo_concurrency.lockutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Acquiring lock "refresh_cache-280c0e48-ae70-40a7-96ca-137efae9ea75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:43:08 compute-0 nova_compute[189387]: 2025-11-26 23:43:08.644 189391 DEBUG oslo_concurrency.lockutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Acquired lock "refresh_cache-280c0e48-ae70-40a7-96ca-137efae9ea75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:43:08 compute-0 nova_compute[189387]: 2025-11-26 23:43:08.644 189391 DEBUG nova.network.neutron [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 26 23:43:08 compute-0 nova_compute[189387]: 2025-11-26 23:43:08.798 189391 DEBUG nova.network.neutron [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 26 23:43:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:09.651 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:09.653 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:09.654 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
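
The acquire/acquired/released triple around _check_child_processes is the standard oslo.concurrency pattern; lockutils.synchronized is the real decorator that emits exactly these DEBUG lines with the waited/held durations. The wrapped class below is only a stand-in for neutron's ProcessMonitor.

    # lockutils.synchronized is the real oslo.concurrency API; the class
    # here is an illustrative stand-in.
    from oslo_concurrency import lockutils

    class ProcessMonitorSketch:
        @lockutils.synchronized('_check_child_processes')
        def _check_child_processes(self):
            # Runs with the named lock held; oslo logs the "acquired ::
            # waited" and "released :: held" lines seen above around it.
            pass
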
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.197 189391 DEBUG nova.network.neutron [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Updating instance_info_cache with network_info: [{"id": "933bd457-0cc9-4849-a69f-0f02814a844a", "address": "fa:16:3e:35:df:c3", "network": {"id": "865b8b48-3753-4a05-b614-ccecb1e87781", "bridge": "br-int", "label": "tempest-network-smoke--2066791378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41a6ffab20ee4735b3f190a1e087aed2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap933bd457-0c", "ovs_interfaceid": "933bd457-0cc9-4849-a69f-0f02814a844a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.391 189391 DEBUG oslo_concurrency.lockutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Releasing lock "refresh_cache-280c0e48-ae70-40a7-96ca-137efae9ea75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
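
The payload written to instance_info_cache above is the full network_info list. For reference, a few lines of plain dict handling are enough to pull the useful fields (device name, MAC, fixed IPs, MTU) out of such a payload; no nova imports are involved, and summarize is a hypothetical helper.

    # Walk a network_info payload like the one logged above.
    import json

    def summarize(network_info_json):
        out = []
        for vif in json.loads(network_info_json):
            ips = [ip['address']
                   for subnet in vif['network']['subnets']
                   for ip in subnet['ips']]
            out.append((vif['devname'], vif['address'], ips,
                        vif['network']['meta'].get('mtu')))
        return out

    # For the record above this yields:
    # [('tap933bd457-0c', 'fa:16:3e:35:df:c3', ['10.100.0.7'], 1442)]
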
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.392 189391 DEBUG nova.compute.manager [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Instance network_info: |[{"id": "933bd457-0cc9-4849-a69f-0f02814a844a", "address": "fa:16:3e:35:df:c3", "network": {"id": "865b8b48-3753-4a05-b614-ccecb1e87781", "bridge": "br-int", "label": "tempest-network-smoke--2066791378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41a6ffab20ee4735b3f190a1e087aed2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap933bd457-0c", "ovs_interfaceid": "933bd457-0cc9-4849-a69f-0f02814a844a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.394 189391 DEBUG nova.virt.libvirt.driver [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Start _get_guest_xml network_info=[{"id": "933bd457-0cc9-4849-a69f-0f02814a844a", "address": "fa:16:3e:35:df:c3", "network": {"id": "865b8b48-3753-4a05-b614-ccecb1e87781", "bridge": "br-int", "label": "tempest-network-smoke--2066791378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41a6ffab20ee4735b3f190a1e087aed2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap933bd457-0c", "ovs_interfaceid": "933bd457-0cc9-4849-a69f-0f02814a844a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T23:40:04Z,direct_url=<?>,disk_format='qcow2',id=948c6d5b-0d46-4aec-8649-b6cdcb1a5694,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd2e793599b6418881c391df7f71e0c6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T23:40:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': '948c6d5b-0d46-4aec-8649-b6cdcb1a5694'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.400 189391 WARNING nova.virt.libvirt.driver [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.405 189391 DEBUG nova.virt.libvirt.host [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.406 189391 DEBUG nova.virt.libvirt.host [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.409 189391 DEBUG nova.virt.libvirt.host [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.409 189391 DEBUG nova.virt.libvirt.host [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
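
The two probes above look for a schedulable CPU controller, first under cgroup v1 (missing on this host) and then under the unified cgroup v2 hierarchy (found). Outside nova, the v2 check reduces to reading the root controllers file; the function name below is illustrative, the path is the standard v2 mount point.

    # Detect a cgroup-v2 'cpu' controller by reading the unified hierarchy.
    def has_cgroupsv2_cpu_controller(path='/sys/fs/cgroup/cgroup.controllers'):
        try:
            with open(path) as f:
                return 'cpu' in f.read().split()
        except FileNotFoundError:
            return False  # no unified hierarchy mounted -> not cgroup v2
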
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.409 189391 DEBUG nova.virt.libvirt.driver [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.410 189391 DEBUG nova.virt.hardware [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T23:40:03Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a4234b2d-ed51-4e17-ad57-a8fb6154451b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T23:40:04Z,direct_url=<?>,disk_format='qcow2',id=948c6d5b-0d46-4aec-8649-b6cdcb1a5694,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='dd2e793599b6418881c391df7f71e0c6',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T23:40:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.410 189391 DEBUG nova.virt.hardware [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.410 189391 DEBUG nova.virt.hardware [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.410 189391 DEBUG nova.virt.hardware [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.410 189391 DEBUG nova.virt.hardware [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.411 189391 DEBUG nova.virt.hardware [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.411 189391 DEBUG nova.virt.hardware [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.411 189391 DEBUG nova.virt.hardware [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.411 189391 DEBUG nova.virt.hardware [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.411 189391 DEBUG nova.virt.hardware [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.411 189391 DEBUG nova.virt.hardware [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
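
The topology walk above ends with a single candidate because the flavor and image impose no constraints (logged as 0:0:0) and the guest has one vCPU, so 1 socket x 1 core x 1 thread is the only factorization. A simplified enumeration under the logged 65536 maxima, as a sketch; nova's real algorithm adds preference ordering and NUMA handling on top of this.

    # Enumerate (sockets, cores, threads) triples whose product equals the
    # vCPU count, capped by per-dimension maxima.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        topos = []
        for s in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % s:
                continue
            for c in range(1, min(vcpus // s, max_cores) + 1):
                if (vcpus // s) % c:
                    continue
                t = vcpus // (s * c)
                if t <= max_threads:
                    topos.append((s, c, t))
        return topos

    print(possible_topologies(1))  # [(1, 1, 1)] -- "Got 1 possible topologies"
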
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.415 189391 DEBUG nova.virt.libvirt.vif [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T23:43:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1281481769',display_name='tempest-TestNetworkBasicOps-server-1281481769',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1281481769',id=13,image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFKin8/XaNI4u/AYbm+AlTkBab4sekoAfGEYZ1xPAIyDCewt1Z3fL7r22TdbnxwwFN3eMieH8Zlh1I4XbYkvGH8E1RbG0Ttc70Iez5mBk4a8ExcMnExYK+II1qhMImhEbA==',key_name='tempest-TestNetworkBasicOps-1027657392',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='41a6ffab20ee4735b3f190a1e087aed2',ramdisk_id='',reservation_id='r-mb0zqbim',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1869958511',owner_user_name='tempest-TestNetworkBasicOps-1869958511-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T23:43:06Z,user_data=None,user_id='6a001028c92e48d0b5914bef72937111',uuid=280c0e48-ae70-40a7-96ca-137efae9ea75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "933bd457-0cc9-4849-a69f-0f02814a844a", "address": "fa:16:3e:35:df:c3", "network": {"id": "865b8b48-3753-4a05-b614-ccecb1e87781", "bridge": "br-int", "label": "tempest-network-smoke--2066791378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41a6ffab20ee4735b3f190a1e087aed2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap933bd457-0c", "ovs_interfaceid": "933bd457-0cc9-4849-a69f-0f02814a844a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.415 189391 DEBUG nova.network.os_vif_util [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Converting VIF {"id": "933bd457-0cc9-4849-a69f-0f02814a844a", "address": "fa:16:3e:35:df:c3", "network": {"id": "865b8b48-3753-4a05-b614-ccecb1e87781", "bridge": "br-int", "label": "tempest-network-smoke--2066791378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41a6ffab20ee4735b3f190a1e087aed2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap933bd457-0c", "ovs_interfaceid": "933bd457-0cc9-4849-a69f-0f02814a844a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.416 189391 DEBUG nova.network.os_vif_util [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:df:c3,bridge_name='br-int',has_traffic_filtering=True,id=933bd457-0cc9-4849-a69f-0f02814a844a,network=Network(865b8b48-3753-4a05-b614-ccecb1e87781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap933bd457-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
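
The Converting/Converted pair above is nova.network.os_vif_util translating the dict-form VIF into a typed os-vif versioned object. Constructing the same VIFOpenVSwitch directly looks roughly like this; the field values are copied from the log, while the construction path (rather than nova's converter) is the sketch, and the exact field set of the os-vif object model may differ by release.

    # Build the logged VIFOpenVSwitch with the os-vif object model.
    from os_vif.objects import network as osv_network
    from os_vif.objects import vif as osv_vif

    network = osv_network.Network(
        id='865b8b48-3753-4a05-b614-ccecb1e87781',
        bridge='br-int',
        label='tempest-network-smoke--2066791378',
        mtu=1442)
    vif = osv_vif.VIFOpenVSwitch(
        id='933bd457-0cc9-4849-a69f-0f02814a844a',
        address='fa:16:3e:35:df:c3',
        network=network,
        vif_name='tap933bd457-0c',
        bridge_name='br-int',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=False)
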
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.418 189391 DEBUG nova.objects.instance [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 280c0e48-ae70-40a7-96ca-137efae9ea75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.433 189391 DEBUG nova.virt.libvirt.driver [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] End _get_guest_xml xml=<domain type="kvm">
Nov 26 23:43:10 compute-0 nova_compute[189387]:  <uuid>280c0e48-ae70-40a7-96ca-137efae9ea75</uuid>
Nov 26 23:43:10 compute-0 nova_compute[189387]:  <name>instance-0000000d</name>
Nov 26 23:43:10 compute-0 nova_compute[189387]:  <memory>131072</memory>
Nov 26 23:43:10 compute-0 nova_compute[189387]:  <vcpu>1</vcpu>
Nov 26 23:43:10 compute-0 nova_compute[189387]:  <metadata>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 23:43:10 compute-0 nova_compute[189387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:      <nova:name>tempest-TestNetworkBasicOps-server-1281481769</nova:name>
Nov 26 23:43:10 compute-0 nova_compute[189387]:      <nova:creationTime>2025-11-26 23:43:10</nova:creationTime>
Nov 26 23:43:10 compute-0 nova_compute[189387]:      <nova:flavor name="m1.nano">
Nov 26 23:43:10 compute-0 nova_compute[189387]:        <nova:memory>128</nova:memory>
Nov 26 23:43:10 compute-0 nova_compute[189387]:        <nova:disk>1</nova:disk>
Nov 26 23:43:10 compute-0 nova_compute[189387]:        <nova:swap>0</nova:swap>
Nov 26 23:43:10 compute-0 nova_compute[189387]:        <nova:ephemeral>0</nova:ephemeral>
Nov 26 23:43:10 compute-0 nova_compute[189387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 23:43:10 compute-0 nova_compute[189387]:      </nova:flavor>
Nov 26 23:43:10 compute-0 nova_compute[189387]:      <nova:owner>
Nov 26 23:43:10 compute-0 nova_compute[189387]:        <nova:user uuid="6a001028c92e48d0b5914bef72937111">tempest-TestNetworkBasicOps-1869958511-project-member</nova:user>
Nov 26 23:43:10 compute-0 nova_compute[189387]:        <nova:project uuid="41a6ffab20ee4735b3f190a1e087aed2">tempest-TestNetworkBasicOps-1869958511</nova:project>
Nov 26 23:43:10 compute-0 nova_compute[189387]:      </nova:owner>
Nov 26 23:43:10 compute-0 nova_compute[189387]:      <nova:root type="image" uuid="948c6d5b-0d46-4aec-8649-b6cdcb1a5694"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:      <nova:ports>
Nov 26 23:43:10 compute-0 nova_compute[189387]:        <nova:port uuid="933bd457-0cc9-4849-a69f-0f02814a844a">
Nov 26 23:43:10 compute-0 nova_compute[189387]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:        </nova:port>
Nov 26 23:43:10 compute-0 nova_compute[189387]:      </nova:ports>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    </nova:instance>
Nov 26 23:43:10 compute-0 nova_compute[189387]:  </metadata>
Nov 26 23:43:10 compute-0 nova_compute[189387]:  <sysinfo type="smbios">
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <system>
Nov 26 23:43:10 compute-0 nova_compute[189387]:      <entry name="manufacturer">RDO</entry>
Nov 26 23:43:10 compute-0 nova_compute[189387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 23:43:10 compute-0 nova_compute[189387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 23:43:10 compute-0 nova_compute[189387]:      <entry name="serial">280c0e48-ae70-40a7-96ca-137efae9ea75</entry>
Nov 26 23:43:10 compute-0 nova_compute[189387]:      <entry name="uuid">280c0e48-ae70-40a7-96ca-137efae9ea75</entry>
Nov 26 23:43:10 compute-0 nova_compute[189387]:      <entry name="family">Virtual Machine</entry>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    </system>
Nov 26 23:43:10 compute-0 nova_compute[189387]:  </sysinfo>
Nov 26 23:43:10 compute-0 nova_compute[189387]:  <os>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <boot dev="hd"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <smbios mode="sysinfo"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:  </os>
Nov 26 23:43:10 compute-0 nova_compute[189387]:  <features>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <acpi/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <apic/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <vmcoreinfo/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:  </features>
Nov 26 23:43:10 compute-0 nova_compute[189387]:  <clock offset="utc">
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <timer name="hpet" present="no"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:  </clock>
Nov 26 23:43:10 compute-0 nova_compute[189387]:  <cpu mode="host-model" match="exact">
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:  </cpu>
Nov 26 23:43:10 compute-0 nova_compute[189387]:  <devices>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <disk type="file" device="disk">
Nov 26 23:43:10 compute-0 nova_compute[189387]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/280c0e48-ae70-40a7-96ca-137efae9ea75/disk"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:      <target dev="vda" bus="virtio"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <disk type="file" device="cdrom">
Nov 26 23:43:10 compute-0 nova_compute[189387]:      <driver name="qemu" type="raw" cache="none"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/280c0e48-ae70-40a7-96ca-137efae9ea75/disk.config"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:      <target dev="sda" bus="sata"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <interface type="ethernet">
Nov 26 23:43:10 compute-0 nova_compute[189387]:      <mac address="fa:16:3e:35:df:c3"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:      <mtu size="1442"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:      <target dev="tap933bd457-0c"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    </interface>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <serial type="pty">
Nov 26 23:43:10 compute-0 nova_compute[189387]:      <log file="/var/lib/nova/instances/280c0e48-ae70-40a7-96ca-137efae9ea75/console.log" append="off"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    </serial>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <video>
Nov 26 23:43:10 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    </video>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <input type="tablet" bus="usb"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <rng model="virtio">
Nov 26 23:43:10 compute-0 nova_compute[189387]:      <backend model="random">/dev/urandom</backend>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    </rng>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <controller type="usb" index="0"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    <memballoon model="virtio">
Nov 26 23:43:10 compute-0 nova_compute[189387]:      <stats period="10"/>
Nov 26 23:43:10 compute-0 nova_compute[189387]:    </memballoon>
Nov 26 23:43:10 compute-0 nova_compute[189387]:  </devices>
Nov 26 23:43:10 compute-0 nova_compute[189387]: </domain>
Nov 26 23:43:10 compute-0 nova_compute[189387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
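
That closes the generated domain XML. From here the driver hands the document to libvirt; stripped of nova's wrapping, the libvirt-python call path is roughly the following sketch (libvirt.open, defineXML and create are the real API; everything around them is simplified).

    # Define and start a domain from XML like the one logged above.
    import libvirt

    xml = '<domain type="kvm">...</domain>'   # the document printed above
    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml)   # persist the definition as instance-0000000d
    dom.create()                # boot it; systemd-machined then registers
                                # machine qemu-13-instance-0000000d (see below)
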
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.434 189391 DEBUG nova.compute.manager [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Preparing to wait for external event network-vif-plugged-933bd457-0cc9-4849-a69f-0f02814a844a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.435 189391 DEBUG oslo_concurrency.lockutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Acquiring lock "280c0e48-ae70-40a7-96ca-137efae9ea75-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.435 189391 DEBUG oslo_concurrency.lockutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "280c0e48-ae70-40a7-96ca-137efae9ea75-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.435 189391 DEBUG oslo_concurrency.lockutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "280c0e48-ae70-40a7-96ca-137efae9ea75-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
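
prepare_for_instance_event registers the network-vif-plugged-933bd457... waiter before the VIF is plugged, so the Neutron-driven callback can later release it. The wait half of that handshake, sketched with a plain threading.Event; the 300-second deadline mirrors nova's vif_plugging_timeout default.

    # Wait half of the external-event handshake (sketch).
    import threading

    event = threading.Event()   # registered under "280c0e48-...-events" above
    # ... plug the VIF and start the guest ...
    if not event.wait(timeout=300):
        raise TimeoutError('network-vif-plugged never arrived')
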
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.437 189391 DEBUG nova.virt.libvirt.vif [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T23:43:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1281481769',display_name='tempest-TestNetworkBasicOps-server-1281481769',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1281481769',id=13,image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFKin8/XaNI4u/AYbm+AlTkBab4sekoAfGEYZ1xPAIyDCewt1Z3fL7r22TdbnxwwFN3eMieH8Zlh1I4XbYkvGH8E1RbG0Ttc70Iez5mBk4a8ExcMnExYK+II1qhMImhEbA==',key_name='tempest-TestNetworkBasicOps-1027657392',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='41a6ffab20ee4735b3f190a1e087aed2',ramdisk_id='',reservation_id='r-mb0zqbim',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1869958511',owner_user_name='tempest-TestNetworkBasicOps-1869958511-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T23:43:06Z,user_data=None,user_id='6a001028c92e48d0b5914bef72937111',uuid=280c0e48-ae70-40a7-96ca-137efae9ea75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "933bd457-0cc9-4849-a69f-0f02814a844a", "address": "fa:16:3e:35:df:c3", "network": {"id": "865b8b48-3753-4a05-b614-ccecb1e87781", "bridge": "br-int", "label": "tempest-network-smoke--2066791378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41a6ffab20ee4735b3f190a1e087aed2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap933bd457-0c", "ovs_interfaceid": "933bd457-0cc9-4849-a69f-0f02814a844a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.437 189391 DEBUG nova.network.os_vif_util [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Converting VIF {"id": "933bd457-0cc9-4849-a69f-0f02814a844a", "address": "fa:16:3e:35:df:c3", "network": {"id": "865b8b48-3753-4a05-b614-ccecb1e87781", "bridge": "br-int", "label": "tempest-network-smoke--2066791378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41a6ffab20ee4735b3f190a1e087aed2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap933bd457-0c", "ovs_interfaceid": "933bd457-0cc9-4849-a69f-0f02814a844a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.438 189391 DEBUG nova.network.os_vif_util [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:df:c3,bridge_name='br-int',has_traffic_filtering=True,id=933bd457-0cc9-4849-a69f-0f02814a844a,network=Network(865b8b48-3753-4a05-b614-ccecb1e87781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap933bd457-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.439 189391 DEBUG os_vif [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:df:c3,bridge_name='br-int',has_traffic_filtering=True,id=933bd457-0cc9-4849-a69f-0f02814a844a,network=Network(865b8b48-3753-4a05-b614-ccecb1e87781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap933bd457-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.440 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.440 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.441 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.446 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.447 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap933bd457-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.447 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap933bd457-0c, col_values=(('external_ids', {'iface-id': '933bd457-0cc9-4849-a69f-0f02814a844a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:35:df:c3', 'vm-uuid': '280c0e48-ae70-40a7-96ca-137efae9ea75'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
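
The transaction above is an idempotent AddPort plus a DbSet writing the Neutron binding hints into the Interface's external_ids; iface-id is the Neutron port UUID that ovn-controller matches against its Port_Binding table (see the "Claiming lport" lines below). With the real ovsdbapp API this looks roughly as follows; the socket path and timeout are assumptions for a default Open vSwitch install.

    # Reproduce the logged ovsdbapp transaction; ovsdbapp APIs are real,
    # connection details are assumptions.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    conn = connection.Connection(
        idl=connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                            'Open_vSwitch'),
        timeout=10)
    api = impl_idl.OvsdbIdl(conn)

    external_ids = {'iface-id': '933bd457-0cc9-4849-a69f-0f02814a844a',
                    'iface-status': 'active',
                    'attached-mac': 'fa:16:3e:35:df:c3',
                    'vm-uuid': '280c0e48-ae70-40a7-96ca-137efae9ea75'}
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap933bd457-0c', may_exist=True))
        txn.add(api.db_set('Interface', 'tap933bd457-0c',
                           ('external_ids', external_ids)))
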
Nov 26 23:43:10 compute-0 NetworkManager[56227]: <info>  [1764200590.4510] manager: (tap933bd457-0c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.449 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.454 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.460 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.462 189391 INFO os_vif [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:df:c3,bridge_name='br-int',has_traffic_filtering=True,id=933bd457-0cc9-4849-a69f-0f02814a844a,network=Network(865b8b48-3753-4a05-b614-ccecb1e87781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap933bd457-0c')#033[00m
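
That INFO line closes the plug. At the top level, os-vif's public surface is just initialize() plus plug(vif, instance_info); the InstanceInfo values below are taken from the log, and 'vif' is a VIFOpenVSwitch like the one built earlier.

    # Top-level os-vif call behind the Plugging/Successfully-plugged pair.
    import os_vif
    from os_vif.objects.instance_info import InstanceInfo

    os_vif.initialize()
    os_vif.plug(vif, InstanceInfo(
        uuid='280c0e48-ae70-40a7-96ca-137efae9ea75',
        name='tempest-TestNetworkBasicOps-server-1281481769'))
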
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.528 189391 DEBUG nova.virt.libvirt.driver [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.528 189391 DEBUG nova.virt.libvirt.driver [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.529 189391 DEBUG nova.virt.libvirt.driver [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] No VIF found with MAC fa:16:3e:35:df:c3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.529 189391 INFO nova.virt.libvirt.driver [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Using config drive#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.615 189391 DEBUG nova.compute.manager [req-1219071a-a76c-4934-8880-d13a899457c8 req-78bb004b-4f83-4517-b439-391d1e576ca1 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Received event network-changed-933bd457-0cc9-4849-a69f-0f02814a844a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.615 189391 DEBUG nova.compute.manager [req-1219071a-a76c-4934-8880-d13a899457c8 req-78bb004b-4f83-4517-b439-391d1e576ca1 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Refreshing instance network info cache due to event network-changed-933bd457-0cc9-4849-a69f-0f02814a844a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.616 189391 DEBUG oslo_concurrency.lockutils [req-1219071a-a76c-4934-8880-d13a899457c8 req-78bb004b-4f83-4517-b439-391d1e576ca1 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "refresh_cache-280c0e48-ae70-40a7-96ca-137efae9ea75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.616 189391 DEBUG oslo_concurrency.lockutils [req-1219071a-a76c-4934-8880-d13a899457c8 req-78bb004b-4f83-4517-b439-391d1e576ca1 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquired lock "refresh_cache-280c0e48-ae70-40a7-96ca-137efae9ea75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.616 189391 DEBUG nova.network.neutron [req-1219071a-a76c-4934-8880-d13a899457c8 req-78bb004b-4f83-4517-b439-391d1e576ca1 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Refreshing network info cache for port 933bd457-0cc9-4849-a69f-0f02814a844a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 23:43:10 compute-0 nova_compute[189387]: 2025-11-26 23:43:10.644 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:11 compute-0 nova_compute[189387]: 2025-11-26 23:43:11.001 189391 INFO nova.virt.libvirt.driver [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Creating config drive at /var/lib/nova/instances/280c0e48-ae70-40a7-96ca-137efae9ea75/disk.config#033[00m
Nov 26 23:43:11 compute-0 nova_compute[189387]: 2025-11-26 23:43:11.011 189391 DEBUG oslo_concurrency.processutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/280c0e48-ae70-40a7-96ca-137efae9ea75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp1igffcz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:43:11 compute-0 ovn_controller[97697]: 2025-11-26T23:43:11Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:56:6c:8b 10.100.0.6
Nov 26 23:43:11 compute-0 ovn_controller[97697]: 2025-11-26T23:43:11Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:56:6c:8b 10.100.0.6
Nov 26 23:43:11 compute-0 nova_compute[189387]: 2025-11-26 23:43:11.157 189391 DEBUG oslo_concurrency.processutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/280c0e48-ae70-40a7-96ca-137efae9ea75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp1igffcz" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
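
One caveat when reading the two mkisofs lines: oslo logs the command space-joined, so it looks as if -publisher received three separate words; in the real argv it is a single element. The equivalent call through the same helper, as an explicit argv list:

    # The config-drive build above, spelled out as an argv list;
    # oslo_concurrency.processutils.execute is the helper that logged it.
    from oslo_concurrency import processutils

    processutils.execute(
        '/usr/bin/mkisofs', '-o',
        '/var/lib/nova/instances/280c0e48-ae70-40a7-96ca-137efae9ea75/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmpp1igffcz')
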
Nov 26 23:43:11 compute-0 kernel: tap933bd457-0c: entered promiscuous mode
Nov 26 23:43:11 compute-0 NetworkManager[56227]: <info>  [1764200591.2733] manager: (tap933bd457-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/65)
Nov 26 23:43:11 compute-0 ovn_controller[97697]: 2025-11-26T23:43:11Z|00177|binding|INFO|Claiming lport 933bd457-0cc9-4849-a69f-0f02814a844a for this chassis.
Nov 26 23:43:11 compute-0 ovn_controller[97697]: 2025-11-26T23:43:11Z|00178|binding|INFO|933bd457-0cc9-4849-a69f-0f02814a844a: Claiming fa:16:3e:35:df:c3 10.100.0.7
Nov 26 23:43:11 compute-0 nova_compute[189387]: 2025-11-26 23:43:11.280 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:11.286 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:df:c3 10.100.0.7'], port_security=['fa:16:3e:35:df:c3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '280c0e48-ae70-40a7-96ca-137efae9ea75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-865b8b48-3753-4a05-b614-ccecb1e87781', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '41a6ffab20ee4735b3f190a1e087aed2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f82289b5-273e-4d7e-9ac6-24bd2e2ecd7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5348c531-5047-446f-b828-c2a0486b273b, chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=933bd457-0cc9-4849-a69f-0f02814a844a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:43:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:11.287 106595 INFO neutron.agent.ovn.metadata.agent [-] Port 933bd457-0cc9-4849-a69f-0f02814a844a in datapath 865b8b48-3753-4a05-b614-ccecb1e87781 bound to our chassis#033[00m
Nov 26 23:43:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:11.291 106595 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 865b8b48-3753-4a05-b614-ccecb1e87781#033[00m
Nov 26 23:43:11 compute-0 ovn_controller[97697]: 2025-11-26T23:43:11Z|00179|binding|INFO|Setting lport 933bd457-0cc9-4849-a69f-0f02814a844a ovn-installed in OVS
Nov 26 23:43:11 compute-0 ovn_controller[97697]: 2025-11-26T23:43:11Z|00180|binding|INFO|Setting lport 933bd457-0cc9-4849-a69f-0f02814a844a up in Southbound
Nov 26 23:43:11 compute-0 nova_compute[189387]: 2025-11-26 23:43:11.316 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:11 compute-0 systemd-udevd[252064]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 23:43:11 compute-0 nova_compute[189387]: 2025-11-26 23:43:11.322 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:11.332 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[899fac4a-0e84-4787-b7a4-6b2cf407bb69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:11 compute-0 NetworkManager[56227]: <info>  [1764200591.3443] device (tap933bd457-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 23:43:11 compute-0 NetworkManager[56227]: <info>  [1764200591.3454] device (tap933bd457-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 23:43:11 compute-0 systemd-machined[155674]: New machine qemu-13-instance-0000000d.
Nov 26 23:43:11 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Nov 26 23:43:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:11.366 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[b9b36522-7d19-47b6-8383-32a79eb86aac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:11.371 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[98b0ad4d-426e-4946-b35a-700ac32ad527]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:11.418 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[f7f13a12-622c-4aac-bd14-02f7a8ab515c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:11.444 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[992b359d-89cb-4241-99ce-31301240ede9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap865b8b48-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:37:94:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520908, 'reachable_time': 41066, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252080, 'error': None, 'target': 'ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:11.468 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[e8b3fa4a-deba-45cd-a735-ceb982338402]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap865b8b48-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520919, 'tstamp': 520919}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252082, 'error': None, 'target': 'ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap865b8b48-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520922, 'tstamp': 520922}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252082, 'error': None, 'target': 'ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:11.470 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap865b8b48-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:43:11 compute-0 nova_compute[189387]: 2025-11-26 23:43:11.472 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:11 compute-0 nova_compute[189387]: 2025-11-26 23:43:11.474 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:11.475 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap865b8b48-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:43:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:11.475 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:43:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:11.476 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap865b8b48-30, col_values=(('external_ids', {'iface-id': '9bcac48d-895a-4cd4-ba63-78258e9255b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:43:11 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:11.477 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:43:11 compute-0 nova_compute[189387]: 2025-11-26 23:43:11.779 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200591.778993, 280c0e48-ae70-40a7-96ca-137efae9ea75 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:43:11 compute-0 nova_compute[189387]: 2025-11-26 23:43:11.780 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] VM Started (Lifecycle Event)#033[00m
Nov 26 23:43:11 compute-0 nova_compute[189387]: 2025-11-26 23:43:11.801 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:43:11 compute-0 nova_compute[189387]: 2025-11-26 23:43:11.814 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200591.7820709, 280c0e48-ae70-40a7-96ca-137efae9ea75 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:43:11 compute-0 nova_compute[189387]: 2025-11-26 23:43:11.814 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] VM Paused (Lifecycle Event)#033[00m
Nov 26 23:43:11 compute-0 nova_compute[189387]: 2025-11-26 23:43:11.848 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:43:11 compute-0 nova_compute[189387]: 2025-11-26 23:43:11.855 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 23:43:11 compute-0 nova_compute[189387]: 2025-11-26 23:43:11.884 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 23:43:12 compute-0 nova_compute[189387]: 2025-11-26 23:43:12.833 189391 DEBUG nova.network.neutron [req-1219071a-a76c-4934-8880-d13a899457c8 req-78bb004b-4f83-4517-b439-391d1e576ca1 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Updated VIF entry in instance network info cache for port 933bd457-0cc9-4849-a69f-0f02814a844a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 23:43:12 compute-0 nova_compute[189387]: 2025-11-26 23:43:12.834 189391 DEBUG nova.network.neutron [req-1219071a-a76c-4934-8880-d13a899457c8 req-78bb004b-4f83-4517-b439-391d1e576ca1 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Updating instance_info_cache with network_info: [{"id": "933bd457-0cc9-4849-a69f-0f02814a844a", "address": "fa:16:3e:35:df:c3", "network": {"id": "865b8b48-3753-4a05-b614-ccecb1e87781", "bridge": "br-int", "label": "tempest-network-smoke--2066791378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41a6ffab20ee4735b3f190a1e087aed2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap933bd457-0c", "ovs_interfaceid": "933bd457-0cc9-4849-a69f-0f02814a844a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:43:12 compute-0 nova_compute[189387]: 2025-11-26 23:43:12.854 189391 DEBUG oslo_concurrency.lockutils [req-1219071a-a76c-4934-8880-d13a899457c8 req-78bb004b-4f83-4517-b439-391d1e576ca1 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Releasing lock "refresh_cache-280c0e48-ae70-40a7-96ca-137efae9ea75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:43:12 compute-0 podman[252090]: 2025-11-26 23:43:12.905980192 +0000 UTC m=+0.180088836 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 23:43:13 compute-0 ovn_controller[97697]: 2025-11-26T23:43:13Z|00181|binding|INFO|Releasing lport 7b0be577-69f9-4df8-992b-e7c104217e56 from this chassis (sb_readonly=0)
Nov 26 23:43:13 compute-0 ovn_controller[97697]: 2025-11-26T23:43:13Z|00182|binding|INFO|Releasing lport 9bcac48d-895a-4cd4-ba63-78258e9255b2 from this chassis (sb_readonly=0)
Nov 26 23:43:13 compute-0 ovn_controller[97697]: 2025-11-26T23:43:13Z|00183|binding|INFO|Releasing lport 5a5b3695-2a05-4fd3-bc2b-35e2893ba4c1 from this chassis (sb_readonly=0)
Nov 26 23:43:13 compute-0 nova_compute[189387]: 2025-11-26 23:43:13.122 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:14 compute-0 nova_compute[189387]: 2025-11-26 23:43:14.144 189391 DEBUG nova.compute.manager [req-2463899e-d9d3-4009-8802-eda68f442bff req-6da449f5-a5f8-473d-af3e-aba09378f1fe f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Received event network-vif-plugged-933bd457-0cc9-4849-a69f-0f02814a844a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:43:14 compute-0 nova_compute[189387]: 2025-11-26 23:43:14.144 189391 DEBUG oslo_concurrency.lockutils [req-2463899e-d9d3-4009-8802-eda68f442bff req-6da449f5-a5f8-473d-af3e-aba09378f1fe f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "280c0e48-ae70-40a7-96ca-137efae9ea75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:14 compute-0 nova_compute[189387]: 2025-11-26 23:43:14.145 189391 DEBUG oslo_concurrency.lockutils [req-2463899e-d9d3-4009-8802-eda68f442bff req-6da449f5-a5f8-473d-af3e-aba09378f1fe f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "280c0e48-ae70-40a7-96ca-137efae9ea75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:14 compute-0 nova_compute[189387]: 2025-11-26 23:43:14.145 189391 DEBUG oslo_concurrency.lockutils [req-2463899e-d9d3-4009-8802-eda68f442bff req-6da449f5-a5f8-473d-af3e-aba09378f1fe f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "280c0e48-ae70-40a7-96ca-137efae9ea75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:14 compute-0 nova_compute[189387]: 2025-11-26 23:43:14.145 189391 DEBUG nova.compute.manager [req-2463899e-d9d3-4009-8802-eda68f442bff req-6da449f5-a5f8-473d-af3e-aba09378f1fe f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Processing event network-vif-plugged-933bd457-0cc9-4849-a69f-0f02814a844a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 26 23:43:14 compute-0 nova_compute[189387]: 2025-11-26 23:43:14.146 189391 DEBUG nova.compute.manager [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 26 23:43:14 compute-0 nova_compute[189387]: 2025-11-26 23:43:14.154 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200594.1531923, 280c0e48-ae70-40a7-96ca-137efae9ea75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:43:14 compute-0 nova_compute[189387]: 2025-11-26 23:43:14.155 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] VM Resumed (Lifecycle Event)#033[00m
Nov 26 23:43:14 compute-0 nova_compute[189387]: 2025-11-26 23:43:14.158 189391 DEBUG nova.virt.libvirt.driver [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 26 23:43:14 compute-0 nova_compute[189387]: 2025-11-26 23:43:14.171 189391 INFO nova.virt.libvirt.driver [-] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Instance spawned successfully.#033[00m
Nov 26 23:43:14 compute-0 nova_compute[189387]: 2025-11-26 23:43:14.171 189391 DEBUG nova.virt.libvirt.driver [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 26 23:43:14 compute-0 nova_compute[189387]: 2025-11-26 23:43:14.181 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:43:14 compute-0 nova_compute[189387]: 2025-11-26 23:43:14.195 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 23:43:14 compute-0 nova_compute[189387]: 2025-11-26 23:43:14.200 189391 DEBUG nova.virt.libvirt.driver [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:43:14 compute-0 nova_compute[189387]: 2025-11-26 23:43:14.201 189391 DEBUG nova.virt.libvirt.driver [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:43:14 compute-0 nova_compute[189387]: 2025-11-26 23:43:14.202 189391 DEBUG nova.virt.libvirt.driver [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:43:14 compute-0 nova_compute[189387]: 2025-11-26 23:43:14.202 189391 DEBUG nova.virt.libvirt.driver [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:43:14 compute-0 nova_compute[189387]: 2025-11-26 23:43:14.203 189391 DEBUG nova.virt.libvirt.driver [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:43:14 compute-0 nova_compute[189387]: 2025-11-26 23:43:14.204 189391 DEBUG nova.virt.libvirt.driver [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:43:14 compute-0 nova_compute[189387]: 2025-11-26 23:43:14.233 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 23:43:14 compute-0 nova_compute[189387]: 2025-11-26 23:43:14.269 189391 INFO nova.compute.manager [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Took 7.55 seconds to spawn the instance on the hypervisor.#033[00m
Nov 26 23:43:14 compute-0 nova_compute[189387]: 2025-11-26 23:43:14.269 189391 DEBUG nova.compute.manager [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:43:14 compute-0 nova_compute[189387]: 2025-11-26 23:43:14.338 189391 INFO nova.compute.manager [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Took 8.29 seconds to build instance.#033[00m
Nov 26 23:43:14 compute-0 nova_compute[189387]: 2025-11-26 23:43:14.357 189391 DEBUG oslo_concurrency.lockutils [None req-97f1178e-07bd-49fb-8f9b-0298c041d7a2 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "280c0e48-ae70-40a7-96ca-137efae9ea75" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.387s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:14 compute-0 podman[252116]: 2025-11-26 23:43:14.833280072 +0000 UTC m=+0.120330005 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_id=edpm, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, name=ubi9, vcs-type=git, architecture=x86_64)
Nov 26 23:43:14 compute-0 podman[252129]: 2025-11-26 23:43:14.849562186 +0000 UTC m=+0.099153861 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, config_id=edpm, managed_by=edpm_ansible, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=)
Nov 26 23:43:14 compute-0 podman[252117]: 2025-11-26 23:43:14.855131574 +0000 UTC m=+0.122954175 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:43:14 compute-0 podman[252123]: 2025-11-26 23:43:14.878173987 +0000 UTC m=+0.140191533 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 26 23:43:14 compute-0 podman[252124]: 2025-11-26 23:43:14.886776156 +0000 UTC m=+0.120709115 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 26 23:43:15 compute-0 nova_compute[189387]: 2025-11-26 23:43:15.451 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:15 compute-0 nova_compute[189387]: 2025-11-26 23:43:15.649 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:16 compute-0 nova_compute[189387]: 2025-11-26 23:43:16.313 189391 DEBUG nova.compute.manager [req-a7a69b4e-9e28-4883-acc2-c76904d9411d req-2b35b81f-8034-4b52-8ef5-abf8e68fa730 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Received event network-vif-plugged-933bd457-0cc9-4849-a69f-0f02814a844a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:43:16 compute-0 nova_compute[189387]: 2025-11-26 23:43:16.313 189391 DEBUG oslo_concurrency.lockutils [req-a7a69b4e-9e28-4883-acc2-c76904d9411d req-2b35b81f-8034-4b52-8ef5-abf8e68fa730 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "280c0e48-ae70-40a7-96ca-137efae9ea75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:16 compute-0 nova_compute[189387]: 2025-11-26 23:43:16.314 189391 DEBUG oslo_concurrency.lockutils [req-a7a69b4e-9e28-4883-acc2-c76904d9411d req-2b35b81f-8034-4b52-8ef5-abf8e68fa730 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "280c0e48-ae70-40a7-96ca-137efae9ea75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:16 compute-0 nova_compute[189387]: 2025-11-26 23:43:16.314 189391 DEBUG oslo_concurrency.lockutils [req-a7a69b4e-9e28-4883-acc2-c76904d9411d req-2b35b81f-8034-4b52-8ef5-abf8e68fa730 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "280c0e48-ae70-40a7-96ca-137efae9ea75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:16 compute-0 nova_compute[189387]: 2025-11-26 23:43:16.315 189391 DEBUG nova.compute.manager [req-a7a69b4e-9e28-4883-acc2-c76904d9411d req-2b35b81f-8034-4b52-8ef5-abf8e68fa730 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] No waiting events found dispatching network-vif-plugged-933bd457-0cc9-4849-a69f-0f02814a844a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:43:16 compute-0 nova_compute[189387]: 2025-11-26 23:43:16.316 189391 WARNING nova.compute.manager [req-a7a69b4e-9e28-4883-acc2-c76904d9411d req-2b35b81f-8034-4b52-8ef5-abf8e68fa730 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Received unexpected event network-vif-plugged-933bd457-0cc9-4849-a69f-0f02814a844a for instance with vm_state active and task_state None.#033[00m
Nov 26 23:43:18 compute-0 nova_compute[189387]: 2025-11-26 23:43:18.533 189391 DEBUG nova.compute.manager [req-99ed3a30-7554-4d89-b42c-7b108dbb369a req-175523fc-7f54-4a0b-a16e-65751c63d18a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Received event network-changed-933bd457-0cc9-4849-a69f-0f02814a844a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:43:18 compute-0 nova_compute[189387]: 2025-11-26 23:43:18.534 189391 DEBUG nova.compute.manager [req-99ed3a30-7554-4d89-b42c-7b108dbb369a req-175523fc-7f54-4a0b-a16e-65751c63d18a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Refreshing instance network info cache due to event network-changed-933bd457-0cc9-4849-a69f-0f02814a844a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 26 23:43:18 compute-0 nova_compute[189387]: 2025-11-26 23:43:18.534 189391 DEBUG oslo_concurrency.lockutils [req-99ed3a30-7554-4d89-b42c-7b108dbb369a req-175523fc-7f54-4a0b-a16e-65751c63d18a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "refresh_cache-280c0e48-ae70-40a7-96ca-137efae9ea75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:43:18 compute-0 nova_compute[189387]: 2025-11-26 23:43:18.534 189391 DEBUG oslo_concurrency.lockutils [req-99ed3a30-7554-4d89-b42c-7b108dbb369a req-175523fc-7f54-4a0b-a16e-65751c63d18a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquired lock "refresh_cache-280c0e48-ae70-40a7-96ca-137efae9ea75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:43:18 compute-0 nova_compute[189387]: 2025-11-26 23:43:18.535 189391 DEBUG nova.network.neutron [req-99ed3a30-7554-4d89-b42c-7b108dbb369a req-175523fc-7f54-4a0b-a16e-65751c63d18a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Refreshing network info cache for port 933bd457-0cc9-4849-a69f-0f02814a844a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 23:43:20 compute-0 nova_compute[189387]: 2025-11-26 23:43:20.455 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:20 compute-0 nova_compute[189387]: 2025-11-26 23:43:20.653 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:21 compute-0 nova_compute[189387]: 2025-11-26 23:43:21.250 189391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764200586.2486262, e6b6d3cd-7df5-455b-a9eb-8209c97d3d26 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:43:21 compute-0 nova_compute[189387]: 2025-11-26 23:43:21.250 189391 INFO nova.compute.manager [-] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] VM Stopped (Lifecycle Event)#033[00m
Nov 26 23:43:21 compute-0 nova_compute[189387]: 2025-11-26 23:43:21.268 189391 DEBUG nova.compute.manager [None req-7983f787-4951-4a36-95f5-cd6cd0e635f2 - - - - - -] [instance: e6b6d3cd-7df5-455b-a9eb-8209c97d3d26] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:43:21 compute-0 nova_compute[189387]: 2025-11-26 23:43:21.929 189391 DEBUG nova.network.neutron [req-99ed3a30-7554-4d89-b42c-7b108dbb369a req-175523fc-7f54-4a0b-a16e-65751c63d18a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Updated VIF entry in instance network info cache for port 933bd457-0cc9-4849-a69f-0f02814a844a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 23:43:21 compute-0 nova_compute[189387]: 2025-11-26 23:43:21.930 189391 DEBUG nova.network.neutron [req-99ed3a30-7554-4d89-b42c-7b108dbb369a req-175523fc-7f54-4a0b-a16e-65751c63d18a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Updating instance_info_cache with network_info: [{"id": "933bd457-0cc9-4849-a69f-0f02814a844a", "address": "fa:16:3e:35:df:c3", "network": {"id": "865b8b48-3753-4a05-b614-ccecb1e87781", "bridge": "br-int", "label": "tempest-network-smoke--2066791378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41a6ffab20ee4735b3f190a1e087aed2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap933bd457-0c", "ovs_interfaceid": "933bd457-0cc9-4849-a69f-0f02814a844a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:43:21 compute-0 nova_compute[189387]: 2025-11-26 23:43:21.964 189391 DEBUG oslo_concurrency.lockutils [req-99ed3a30-7554-4d89-b42c-7b108dbb369a req-175523fc-7f54-4a0b-a16e-65751c63d18a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Releasing lock "refresh_cache-280c0e48-ae70-40a7-96ca-137efae9ea75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:43:23 compute-0 podman[252210]: 2025-11-26 23:43:23.871003068 +0000 UTC m=+0.152649245 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 26 23:43:25 compute-0 nova_compute[189387]: 2025-11-26 23:43:25.459 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:25 compute-0 nova_compute[189387]: 2025-11-26 23:43:25.655 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:27 compute-0 podman[252230]: 2025-11-26 23:43:27.798765722 +0000 UTC m=+0.097998594 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:43:29 compute-0 podman[203621]: time="2025-11-26T23:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:43:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31988 "" "Go-http-client/1.1"
Nov 26 23:43:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5737 "" "Go-http-client/1.1"
Nov 26 23:43:30 compute-0 nova_compute[189387]: 2025-11-26 23:43:30.319 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:30 compute-0 nova_compute[189387]: 2025-11-26 23:43:30.462 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:30 compute-0 nova_compute[189387]: 2025-11-26 23:43:30.658 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:31 compute-0 openstack_network_exporter[205787]: ERROR   23:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:43:31 compute-0 openstack_network_exporter[205787]: ERROR   23:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:43:31 compute-0 openstack_network_exporter[205787]: ERROR   23:43:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:43:31 compute-0 openstack_network_exporter[205787]: ERROR   23:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:43:31 compute-0 openstack_network_exporter[205787]: ERROR   23:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:43:34 compute-0 podman[252254]: 2025-11-26 23:43:34.858179834 +0000 UTC m=+0.136221421 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 26 23:43:35 compute-0 nova_compute[189387]: 2025-11-26 23:43:35.049 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:35 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:35.391 106703 DEBUG eventlet.wsgi.server [-] (106703) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Nov 26 23:43:35 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:35.393 106703 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0#015
Nov 26 23:43:35 compute-0 ovn_metadata_agent[106590]: Accept: */*#015
Nov 26 23:43:35 compute-0 ovn_metadata_agent[106590]: Connection: close#015
Nov 26 23:43:35 compute-0 ovn_metadata_agent[106590]: Content-Type: text/plain#015
Nov 26 23:43:35 compute-0 ovn_metadata_agent[106590]: Host: 169.254.169.254#015
Nov 26 23:43:35 compute-0 ovn_metadata_agent[106590]: User-Agent: curl/7.84.0#015
Nov 26 23:43:35 compute-0 ovn_metadata_agent[106590]: X-Forwarded-For: 10.100.0.5#015
Nov 26 23:43:35 compute-0 ovn_metadata_agent[106590]: X-Ovn-Network-Id: 3f903c92-a599-4991-906d-3ed8e3e8eabd __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Nov 26 23:43:35 compute-0 nova_compute[189387]: 2025-11-26 23:43:35.465 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:35 compute-0 nova_compute[189387]: 2025-11-26 23:43:35.831 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:36 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:36.733 106703 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Nov 26 23:43:36 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:36.734 106703 INFO eventlet.wsgi.server [-] 10.100.0.5,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.3412051#033[00m
Nov 26 23:43:36 compute-0 haproxy-metadata-proxy-3f903c92-a599-4991-906d-3ed8e3e8eabd[251319]: 10.100.0.5:42142 [26/Nov/2025:23:43:35.390] listener listener/metadata 0/0/0/1344/1344 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Nov 26 23:43:36 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:36.882 106703 DEBUG eventlet.wsgi.server [-] (106703) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Nov 26 23:43:36 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:36.883 106703 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0#015
Nov 26 23:43:36 compute-0 ovn_metadata_agent[106590]: Accept: */*#015
Nov 26 23:43:36 compute-0 ovn_metadata_agent[106590]: Connection: close#015
Nov 26 23:43:36 compute-0 ovn_metadata_agent[106590]: Content-Length: 100#015
Nov 26 23:43:36 compute-0 ovn_metadata_agent[106590]: Content-Type: application/x-www-form-urlencoded#015
Nov 26 23:43:36 compute-0 ovn_metadata_agent[106590]: Host: 169.254.169.254#015
Nov 26 23:43:36 compute-0 ovn_metadata_agent[106590]: User-Agent: curl/7.84.0#015
Nov 26 23:43:36 compute-0 ovn_metadata_agent[106590]: X-Forwarded-For: 10.100.0.5#015
Nov 26 23:43:36 compute-0 ovn_metadata_agent[106590]: X-Ovn-Network-Id: 3f903c92-a599-4991-906d-3ed8e3e8eabd#015
Nov 26 23:43:36 compute-0 ovn_metadata_agent[106590]: #015
Nov 26 23:43:36 compute-0 ovn_metadata_agent[106590]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Nov 26 23:43:37 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:37.016 106703 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Nov 26 23:43:37 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:37.017 106703 INFO eventlet.wsgi.server [-] 10.100.0.5,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.1343429#033[00m
Nov 26 23:43:37 compute-0 haproxy-metadata-proxy-3f903c92-a599-4991-906d-3ed8e3e8eabd[251319]: 10.100.0.5:42156 [26/Nov/2025:23:43:36.880] listener listener/metadata 0/0/0/137/137 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.161 189391 DEBUG oslo_concurrency.lockutils [None req-9600cf0a-7bcd-420e-9469-7cc879ad8c00 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Acquiring lock "8c6c2d42-56ca-46f9-a12a-54c84adf5dbd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.163 189391 DEBUG oslo_concurrency.lockutils [None req-9600cf0a-7bcd-420e-9469-7cc879ad8c00 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Lock "8c6c2d42-56ca-46f9-a12a-54c84adf5dbd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.164 189391 DEBUG oslo_concurrency.lockutils [None req-9600cf0a-7bcd-420e-9469-7cc879ad8c00 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Acquiring lock "8c6c2d42-56ca-46f9-a12a-54c84adf5dbd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.165 189391 DEBUG oslo_concurrency.lockutils [None req-9600cf0a-7bcd-420e-9469-7cc879ad8c00 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Lock "8c6c2d42-56ca-46f9-a12a-54c84adf5dbd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.166 189391 DEBUG oslo_concurrency.lockutils [None req-9600cf0a-7bcd-420e-9469-7cc879ad8c00 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Lock "8c6c2d42-56ca-46f9-a12a-54c84adf5dbd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.169 189391 INFO nova.compute.manager [None req-9600cf0a-7bcd-420e-9469-7cc879ad8c00 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Terminating instance#033[00m
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.171 189391 DEBUG nova.compute.manager [None req-9600cf0a-7bcd-420e-9469-7cc879ad8c00 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
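The lockutils lines above show Nova's serialization pattern for deletes: do_terminate_instance runs under a per-instance lock keyed on the UUID, and briefly takes a second "<uuid>-events" lock to clear queued external events before touching the hypervisor. A sketch of that pattern with oslo.concurrency (lock names copied from the log; the bodies are illustrative):

    # Nested internal locks, as in the acquire/release pairs logged above.
    from oslo_concurrency import lockutils

    instance_uuid = "8c6c2d42-56ca-46f9-a12a-54c84adf5dbd"

    with lockutils.lock(instance_uuid):                  # do_terminate_instance
        with lockutils.lock(instance_uuid + "-events"):  # _clear_events
            pending_events = {}  # drop callbacks queued for this instance
        # ... destroy the instance on the hypervisor ...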
Nov 26 23:43:39 compute-0 kernel: tapb298dc50-93 (unregistering): left promiscuous mode
Nov 26 23:43:39 compute-0 NetworkManager[56227]: <info>  [1764200619.2181] device (tapb298dc50-93): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 23:43:39 compute-0 ovn_controller[97697]: 2025-11-26T23:43:39Z|00184|binding|INFO|Releasing lport b298dc50-93b6-439e-8c42-b9795220b150 from this chassis (sb_readonly=0)
Nov 26 23:43:39 compute-0 ovn_controller[97697]: 2025-11-26T23:43:39Z|00185|binding|INFO|Setting lport b298dc50-93b6-439e-8c42-b9795220b150 down in Southbound
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.253 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:39 compute-0 ovn_controller[97697]: 2025-11-26T23:43:39Z|00186|binding|INFO|Removing iface tapb298dc50-93 ovn-installed in OVS
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.260 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:39 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:39.264 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:71:58 10.100.0.5'], port_security=['fa:16:3e:77:71:58 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '8c6c2d42-56ca-46f9-a12a-54c84adf5dbd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f903c92-a599-4991-906d-3ed8e3e8eabd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75af4c8383fc485a90ab9085bbabf0f8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '929860e4-b70e-4cb4-804a-81241a8ff3a6 e608f18f-4caf-4bf6-b81d-d3068f814eda', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.221'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c7386bcd-ad4e-45fd-95d3-be817d33b89f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=b298dc50-93b6-439e-8c42-b9795220b150) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:43:39 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:39.266 106595 INFO neutron.agent.ovn.metadata.agent [-] Port b298dc50-93b6-439e-8c42-b9795220b150 in datapath 3f903c92-a599-4991-906d-3ed8e3e8eabd unbound from our chassis#033[00m
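The UPDATE matched above is how the metadata agent notices the port going away: it registers a row event against the OVN southbound Port_Binding table and fires when the binding's chassis is cleared. A rough sketch of such an event with ovsdbapp; the real neutron class adds queueing and chassis filtering, so the run() body here is illustrative only:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Matches the repr in the log:
            # events=('update',), table='Port_Binding', conditions=None
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # 'old' carries prior values of the changed columns; the log
            # shows up=[True] -> [False] with the chassis reference removed.
            if getattr(old, 'chassis', None):
                print('Port %s unbound from our chassis' % row.logical_port)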
Nov 26 23:43:39 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:39.271 106595 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3f903c92-a599-4991-906d-3ed8e3e8eabd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 26 23:43:39 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:39.274 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[aad8d429-be1f-455f-9fd6-b46d208dfca1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:39 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:39.275 106595 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3f903c92-a599-4991-906d-3ed8e3e8eabd namespace which is not needed anymore#033[00m
Nov 26 23:43:39 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Nov 26 23:43:39 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 43.464s CPU time.
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.282 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:39 compute-0 systemd-machined[155674]: Machine qemu-10-instance-0000000a terminated.
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.408 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:39 compute-0 ovn_controller[97697]: 2025-11-26T23:43:39Z|00187|binding|INFO|Releasing lport 7b0be577-69f9-4df8-992b-e7c104217e56 from this chassis (sb_readonly=0)
Nov 26 23:43:39 compute-0 ovn_controller[97697]: 2025-11-26T23:43:39Z|00188|binding|INFO|Releasing lport 9bcac48d-895a-4cd4-ba63-78258e9255b2 from this chassis (sb_readonly=0)
Nov 26 23:43:39 compute-0 ovn_controller[97697]: 2025-11-26T23:43:39Z|00189|binding|INFO|Releasing lport 5a5b3695-2a05-4fd3-bc2b-35e2893ba4c1 from this chassis (sb_readonly=0)
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.468 189391 INFO nova.virt.libvirt.driver [-] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Instance destroyed successfully.#033[00m
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.469 189391 DEBUG nova.objects.instance [None req-9600cf0a-7bcd-420e-9469-7cc879ad8c00 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Lazy-loading 'resources' on Instance uuid 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.484 189391 DEBUG nova.virt.libvirt.vif [None req-9600cf0a-7bcd-420e-9469-7cc879ad8c00 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T23:42:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1593775238',display_name='tempest-TestServerBasicOps-server-1593775238',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1593775238',id=10,image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAW6WLuEWeJ3uGhCOJvpZEYHtUsyu3kMo+zjCf77nj/CKShEF5RM77Qbj9w2/a63wSpqxs7HM2PI7A3+mwx/astLsUFGUKpowR2wdWBKmdSPy3reaD8i1gUwpy4qqUlH6Q==',key_name='tempest-TestServerBasicOps-14952678',keypairs=<?>,launch_index=0,launched_at=2025-11-26T23:42:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='75af4c8383fc485a90ab9085bbabf0f8',ramdisk_id='',reservation_id='r-ecdm7s5e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-326940996',owner_user_name='tempest-TestServerBasicOps-326940996-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T23:43:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a4055ba44a1948148b34c151da34f6e3',uuid=8c6c2d42-56ca-46f9-a12a-54c84adf5dbd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b298dc50-93b6-439e-8c42-b9795220b150", "address": "fa:16:3e:77:71:58", "network": {"id": "3f903c92-a599-4991-906d-3ed8e3e8eabd", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2000708722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "75af4c8383fc485a90ab9085bbabf0f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb298dc50-93", "ovs_interfaceid": "b298dc50-93b6-439e-8c42-b9795220b150", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.485 189391 DEBUG nova.network.os_vif_util [None req-9600cf0a-7bcd-420e-9469-7cc879ad8c00 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Converting VIF {"id": "b298dc50-93b6-439e-8c42-b9795220b150", "address": "fa:16:3e:77:71:58", "network": {"id": "3f903c92-a599-4991-906d-3ed8e3e8eabd", "bridge": "br-int", "label": "tempest-TestServerBasicOps-2000708722-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75af4c8383fc485a90ab9085bbabf0f8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb298dc50-93", "ovs_interfaceid": "b298dc50-93b6-439e-8c42-b9795220b150", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.487 189391 DEBUG nova.network.os_vif_util [None req-9600cf0a-7bcd-420e-9469-7cc879ad8c00 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:77:71:58,bridge_name='br-int',has_traffic_filtering=True,id=b298dc50-93b6-439e-8c42-b9795220b150,network=Network(3f903c92-a599-4991-906d-3ed8e3e8eabd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb298dc50-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.487 189391 DEBUG os_vif [None req-9600cf0a-7bcd-420e-9469-7cc879ad8c00 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:71:58,bridge_name='br-int',has_traffic_filtering=True,id=b298dc50-93b6-439e-8c42-b9795220b150,network=Network(3f903c92-a599-4991-906d-3ed8e3e8eabd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb298dc50-93') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.490 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.490 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb298dc50-93, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.493 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.495 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.519 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.522 189391 INFO os_vif [None req-9600cf0a-7bcd-420e-9469-7cc879ad8c00 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:71:58,bridge_name='br-int',has_traffic_filtering=True,id=b298dc50-93b6-439e-8c42-b9795220b150,network=Network(3f903c92-a599-4991-906d-3ed8e3e8eabd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb298dc50-93')#033[00m
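Lines 23:43:39.484 through .522 are the nova-to-os-vif hand-off: the Neutron port dict is converted to a VIFOpenVSwitch object and passed to os_vif.unplug(), whose ovs plugin commits the DelPortCommand transaction logged at .490. A minimal sketch of that call chain; the field values are copied from the log, but the real VIF also carries the Network and VIFPortProfileOpenVSwitch objects shown in the converted repr:

    import os_vif
    from os_vif.objects import instance_info, vif as vif_obj

    os_vif.initialize()  # loads the 'ovs' plugin, among others

    vif = vif_obj.VIFOpenVSwitch(
        id='b298dc50-93b6-439e-8c42-b9795220b150',
        address='fa:16:3e:77:71:58',
        bridge_name='br-int',
        vif_name='tapb298dc50-93',
    )
    inst = instance_info.InstanceInfo(
        uuid='8c6c2d42-56ca-46f9-a12a-54c84adf5dbd',
        name='instance-0000000a',
    )
    # The ovs plugin implements unplug as the DelPortCommand transaction
    # seen above (delete tapb298dc50-93 from br-int, if_exists=True).
    os_vif.unplug(vif, inst)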
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.523 189391 INFO nova.virt.libvirt.driver [None req-9600cf0a-7bcd-420e-9469-7cc879ad8c00 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Deleting instance files /var/lib/nova/instances/8c6c2d42-56ca-46f9-a12a-54c84adf5dbd_del#033[00m
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.525 189391 INFO nova.virt.libvirt.driver [None req-9600cf0a-7bcd-420e-9469-7cc879ad8c00 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Deletion of /var/lib/nova/instances/8c6c2d42-56ca-46f9-a12a-54c84adf5dbd_del complete#033[00m
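Note the _del suffix in the two lines above: Nova removes instance files by first renaming the directory to <uuid>_del and then deleting it, so an interrupted delete leaves an obviously stale directory rather than a half-valid one. The pattern, sketched with an illustrative helper:

    import os
    import shutil

    def delete_instance_files(path):
        # e.g. /var/lib/nova/instances/8c6c2d42-..._del, as logged above
        target = path + '_del'
        os.rename(path, target)  # atomic within one filesystem
        shutil.rmtree(target, ignore_errors=True)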
Nov 26 23:43:39 compute-0 neutron-haproxy-ovnmeta-3f903c92-a599-4991-906d-3ed8e3e8eabd[251312]: [NOTICE]   (251317) : haproxy version is 2.8.14-c23fe91
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.532 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:39 compute-0 neutron-haproxy-ovnmeta-3f903c92-a599-4991-906d-3ed8e3e8eabd[251312]: [NOTICE]   (251317) : path to executable is /usr/sbin/haproxy
Nov 26 23:43:39 compute-0 neutron-haproxy-ovnmeta-3f903c92-a599-4991-906d-3ed8e3e8eabd[251312]: [WARNING]  (251317) : Exiting Master process...
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.535 189391 DEBUG nova.compute.manager [req-f745b62f-16bb-4f27-893f-6197340cfa30 req-e5c9a806-349f-4bfa-8f62-351645f27131 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Received event network-vif-unplugged-b298dc50-93b6-439e-8c42-b9795220b150 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.535 189391 DEBUG oslo_concurrency.lockutils [req-f745b62f-16bb-4f27-893f-6197340cfa30 req-e5c9a806-349f-4bfa-8f62-351645f27131 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "8c6c2d42-56ca-46f9-a12a-54c84adf5dbd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.535 189391 DEBUG oslo_concurrency.lockutils [req-f745b62f-16bb-4f27-893f-6197340cfa30 req-e5c9a806-349f-4bfa-8f62-351645f27131 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "8c6c2d42-56ca-46f9-a12a-54c84adf5dbd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.535 189391 DEBUG oslo_concurrency.lockutils [req-f745b62f-16bb-4f27-893f-6197340cfa30 req-e5c9a806-349f-4bfa-8f62-351645f27131 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "8c6c2d42-56ca-46f9-a12a-54c84adf5dbd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.536 189391 DEBUG nova.compute.manager [req-f745b62f-16bb-4f27-893f-6197340cfa30 req-e5c9a806-349f-4bfa-8f62-351645f27131 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] No waiting events found dispatching network-vif-unplugged-b298dc50-93b6-439e-8c42-b9795220b150 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.536 189391 DEBUG nova.compute.manager [req-f745b62f-16bb-4f27-893f-6197340cfa30 req-e5c9a806-349f-4bfa-8f62-351645f27131 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Received event network-vif-unplugged-b298dc50-93b6-439e-8c42-b9795220b150 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 26 23:43:39 compute-0 neutron-haproxy-ovnmeta-3f903c92-a599-4991-906d-3ed8e3e8eabd[251312]: [ALERT]    (251317) : Current worker (251319) exited with code 143 (Terminated)
Nov 26 23:43:39 compute-0 neutron-haproxy-ovnmeta-3f903c92-a599-4991-906d-3ed8e3e8eabd[251312]: [WARNING]  (251317) : All workers exited. Exiting... (0)
Nov 26 23:43:39 compute-0 systemd[1]: libpod-4e9b708c5ab6d70f2f44548185c284e68326eb7a55e150f6376d2732ad68b359.scope: Deactivated successfully.
Nov 26 23:43:39 compute-0 podman[252306]: 2025-11-26 23:43:39.545983411 +0000 UTC m=+0.089253104 container died 4e9b708c5ab6d70f2f44548185c284e68326eb7a55e150f6376d2732ad68b359 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f903c92-a599-4991-906d-3ed8e3e8eabd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.582 189391 INFO nova.compute.manager [None req-9600cf0a-7bcd-420e-9469-7cc879ad8c00 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Took 0.41 seconds to destroy the instance on the hypervisor.#033[00m
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.583 189391 DEBUG oslo.service.loopingcall [None req-9600cf0a-7bcd-420e-9469-7cc879ad8c00 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.583 189391 DEBUG nova.compute.manager [-] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.584 189391 DEBUG nova.network.neutron [-] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 26 23:43:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4e9b708c5ab6d70f2f44548185c284e68326eb7a55e150f6376d2732ad68b359-userdata-shm.mount: Deactivated successfully.
Nov 26 23:43:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-79591e0d8d81ca6dbf911e2c575d313c692a44d19d86d5bcb63dbf444961091a-merged.mount: Deactivated successfully.
Nov 26 23:43:39 compute-0 podman[252306]: 2025-11-26 23:43:39.616922464 +0000 UTC m=+0.160192157 container cleanup 4e9b708c5ab6d70f2f44548185c284e68326eb7a55e150f6376d2732ad68b359 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f903c92-a599-4991-906d-3ed8e3e8eabd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 26 23:43:39 compute-0 systemd[1]: libpod-conmon-4e9b708c5ab6d70f2f44548185c284e68326eb7a55e150f6376d2732ad68b359.scope: Deactivated successfully.
Nov 26 23:43:39 compute-0 podman[252337]: 2025-11-26 23:43:39.711310738 +0000 UTC m=+0.059734226 container remove 4e9b708c5ab6d70f2f44548185c284e68326eb7a55e150f6376d2732ad68b359 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f903c92-a599-4991-906d-3ed8e3e8eabd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 26 23:43:39 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:39.739 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[4c8e9511-11d4-446f-ab0a-71a01cc9b82f]: (4, ('Wed Nov 26 11:43:39 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-3f903c92-a599-4991-906d-3ed8e3e8eabd (4e9b708c5ab6d70f2f44548185c284e68326eb7a55e150f6376d2732ad68b359)\n4e9b708c5ab6d70f2f44548185c284e68326eb7a55e150f6376d2732ad68b359\nWed Nov 26 11:43:39 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-3f903c92-a599-4991-906d-3ed8e3e8eabd (4e9b708c5ab6d70f2f44548185c284e68326eb7a55e150f6376d2732ad68b359)\n4e9b708c5ab6d70f2f44548185c284e68326eb7a55e150f6376d2732ad68b359\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
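The privsep reply above captures the output of the haproxy kill script: stop the per-network container, then delete it. Functionally it reduces to two podman calls (container name from the log; error handling omitted):

    import subprocess

    name = 'neutron-haproxy-ovnmeta-3f903c92-a599-4991-906d-3ed8e3e8eabd'
    subprocess.run(['podman', 'stop', name], check=True)  # worker exits 143 (SIGTERM)
    subprocess.run(['podman', 'rm', name], check=True)    # the "container remove" event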
Nov 26 23:43:39 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:39.742 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[fa6a76c5-39be-4fb6-83de-5b2092f7f0c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:39 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:39.743 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f903c92-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:43:39 compute-0 kernel: tap3f903c92-a0: left promiscuous mode
Nov 26 23:43:39 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:39.753 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[857b7d00-c645-4ba3-8884-299795d7dfa9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.768 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:39 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:39.776 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[b197f48b-f7e4-487b-a0ad-e8d95f6bc588]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:39 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:39.779 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[f049600e-6eb9-48e7-8473-1bc2f158a9dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:39 compute-0 nova_compute[189387]: 2025-11-26 23:43:39.780 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:39 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:39.802 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[a60605af-9c54-4067-9061-a14374523c42]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522710, 'reachable_time': 42470, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252350, 'error': None, 'target': 'ovnmeta-3f903c92-a599-4991-906d-3ed8e3e8eabd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:39 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:39.806 106708 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3f903c92-a599-4991-906d-3ed8e3e8eabd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 26 23:43:39 compute-0 systemd[1]: run-netns-ovnmeta\x2d3f903c92\x2da599\x2d4991\x2d906d\x2d3ed8e3e8eabd.mount: Deactivated successfully.
Nov 26 23:43:39 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:39.807 106708 DEBUG oslo.privsep.daemon [-] privsep: reply[7b034405-175b-46b4-bf4a-26f854b8723e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
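Namespace teardown finishes above with neutron's privileged ip_lib removing ovnmeta-3f903c92-...; the run-netns mount deactivation systemd logs is the kernel-side effect of the same unmount. A sketch of that removal via pyroute2 (which ip_lib wraps), assuming the caller has the required privileges:

    from pyroute2 import netns

    ns = 'ovnmeta-3f903c92-a599-4991-906d-3ed8e3e8eabd'
    if ns in netns.listnetns():
        netns.remove(ns)  # unpins /run/netns/<ns>, hence the mount unit log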
Nov 26 23:43:40 compute-0 nova_compute[189387]: 2025-11-26 23:43:40.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:43:40 compute-0 nova_compute[189387]: 2025-11-26 23:43:40.126 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 23:43:40 compute-0 nova_compute[189387]: 2025-11-26 23:43:40.154 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9907#033[00m
Nov 26 23:43:40 compute-0 nova_compute[189387]: 2025-11-26 23:43:40.449 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-2b8e8c61-3efb-436e-87b5-35ac9fe60d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:43:40 compute-0 nova_compute[189387]: 2025-11-26 23:43:40.451 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-2b8e8c61-3efb-436e-87b5-35ac9fe60d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:43:40 compute-0 nova_compute[189387]: 2025-11-26 23:43:40.452 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 26 23:43:40 compute-0 nova_compute[189387]: 2025-11-26 23:43:40.666 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:41 compute-0 nova_compute[189387]: 2025-11-26 23:43:41.078 189391 DEBUG nova.network.neutron [-] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:43:41 compute-0 nova_compute[189387]: 2025-11-26 23:43:41.094 189391 INFO nova.compute.manager [-] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Took 1.51 seconds to deallocate network for instance.#033[00m
Nov 26 23:43:41 compute-0 nova_compute[189387]: 2025-11-26 23:43:41.136 189391 DEBUG oslo_concurrency.lockutils [None req-9600cf0a-7bcd-420e-9469-7cc879ad8c00 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:41 compute-0 nova_compute[189387]: 2025-11-26 23:43:41.137 189391 DEBUG oslo_concurrency.lockutils [None req-9600cf0a-7bcd-420e-9469-7cc879ad8c00 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:41 compute-0 nova_compute[189387]: 2025-11-26 23:43:41.247 189391 DEBUG nova.compute.provider_tree [None req-9600cf0a-7bcd-420e-9469-7cc879ad8c00 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:43:41 compute-0 nova_compute[189387]: 2025-11-26 23:43:41.261 189391 DEBUG nova.scheduler.client.report [None req-9600cf0a-7bcd-420e-9469-7cc879ad8c00 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 23:43:41 compute-0 nova_compute[189387]: 2025-11-26 23:43:41.280 189391 DEBUG oslo_concurrency.lockutils [None req-9600cf0a-7bcd-420e-9469-7cc879ad8c00 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:41 compute-0 nova_compute[189387]: 2025-11-26 23:43:41.311 189391 INFO nova.scheduler.client.report [None req-9600cf0a-7bcd-420e-9469-7cc879ad8c00 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Deleted allocations for instance 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd#033[00m
Nov 26 23:43:41 compute-0 nova_compute[189387]: 2025-11-26 23:43:41.378 189391 DEBUG oslo_concurrency.lockutils [None req-9600cf0a-7bcd-420e-9469-7cc879ad8c00 a4055ba44a1948148b34c151da34f6e3 75af4c8383fc485a90ab9085bbabf0f8 - - default default] Lock "8c6c2d42-56ca-46f9-a12a-54c84adf5dbd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.216s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
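With the instance gone, the resource tracker re-reports inventory to placement (23:43:41.261) and deletes the instance's allocations. Placement derives usable capacity per resource class as (total - reserved) * allocation_ratio; with the values logged for this host:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2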
Nov 26 23:43:41 compute-0 nova_compute[189387]: 2025-11-26 23:43:41.702 189391 DEBUG nova.compute.manager [req-9ed86feb-3c1b-4647-a724-00c05578ba75 req-76e81f36-daeb-4c0c-aa37-db0d52e86122 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Received event network-vif-plugged-b298dc50-93b6-439e-8c42-b9795220b150 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:43:41 compute-0 nova_compute[189387]: 2025-11-26 23:43:41.704 189391 DEBUG oslo_concurrency.lockutils [req-9ed86feb-3c1b-4647-a724-00c05578ba75 req-76e81f36-daeb-4c0c-aa37-db0d52e86122 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "8c6c2d42-56ca-46f9-a12a-54c84adf5dbd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:41 compute-0 nova_compute[189387]: 2025-11-26 23:43:41.705 189391 DEBUG oslo_concurrency.lockutils [req-9ed86feb-3c1b-4647-a724-00c05578ba75 req-76e81f36-daeb-4c0c-aa37-db0d52e86122 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "8c6c2d42-56ca-46f9-a12a-54c84adf5dbd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:41 compute-0 nova_compute[189387]: 2025-11-26 23:43:41.706 189391 DEBUG oslo_concurrency.lockutils [req-9ed86feb-3c1b-4647-a724-00c05578ba75 req-76e81f36-daeb-4c0c-aa37-db0d52e86122 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "8c6c2d42-56ca-46f9-a12a-54c84adf5dbd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:41 compute-0 nova_compute[189387]: 2025-11-26 23:43:41.707 189391 DEBUG nova.compute.manager [req-9ed86feb-3c1b-4647-a724-00c05578ba75 req-76e81f36-daeb-4c0c-aa37-db0d52e86122 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] No waiting events found dispatching network-vif-plugged-b298dc50-93b6-439e-8c42-b9795220b150 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:43:41 compute-0 nova_compute[189387]: 2025-11-26 23:43:41.708 189391 WARNING nova.compute.manager [req-9ed86feb-3c1b-4647-a724-00c05578ba75 req-76e81f36-daeb-4c0c-aa37-db0d52e86122 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Received unexpected event network-vif-plugged-b298dc50-93b6-439e-8c42-b9795220b150 for instance with vm_state deleted and task_state None.#033[00m
Nov 26 23:43:41 compute-0 nova_compute[189387]: 2025-11-26 23:43:41.709 189391 DEBUG nova.compute.manager [req-9ed86feb-3c1b-4647-a724-00c05578ba75 req-76e81f36-daeb-4c0c-aa37-db0d52e86122 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Received event network-vif-deleted-b298dc50-93b6-439e-8c42-b9795220b150 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:43:42 compute-0 nova_compute[189387]: 2025-11-26 23:43:42.135 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Updating instance_info_cache with network_info: [{"id": "798557c8-33b8-48fa-ba80-092115a6af38", "address": "fa:16:3e:56:6c:8b", "network": {"id": "d6f23c8c-9266-4c49-bc94-0b9f021c07c2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-495565316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5cd62a5ad724aed83d939e3ba6d7fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798557c8-33", "ovs_interfaceid": "798557c8-33b8-48fa-ba80-092115a6af38", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:43:42 compute-0 nova_compute[189387]: 2025-11-26 23:43:42.189 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-2b8e8c61-3efb-436e-87b5-35ac9fe60d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:43:42 compute-0 nova_compute[189387]: 2025-11-26 23:43:42.190 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
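_heal_instance_info_cache, which skipped the deleting instance at 23:43:40.154 and refreshed the other one above, is a standard oslo.service periodic task. The registration pattern, sketched with a hypothetical manager class and an illustrative interval:

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)  # interval is illustrative
        def _heal_instance_info_cache(self, context):
            # Nova refreshes one instance's network info per run and skips
            # instances whose task_state is 'deleting', as logged above.
            pass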
Nov 26 23:43:43 compute-0 podman[252351]: 2025-11-26 23:43:43.916239996 +0000 UTC m=+0.196319877 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 26 23:43:44 compute-0 nova_compute[189387]: 2025-11-26 23:43:44.106 189391 DEBUG oslo_concurrency.lockutils [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Acquiring lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:44 compute-0 nova_compute[189387]: 2025-11-26 23:43:44.107 189391 DEBUG oslo_concurrency.lockutils [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:44 compute-0 nova_compute[189387]: 2025-11-26 23:43:44.108 189391 INFO nova.compute.manager [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Rebooting instance#033[00m
Nov 26 23:43:44 compute-0 nova_compute[189387]: 2025-11-26 23:43:44.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:43:44 compute-0 nova_compute[189387]: 2025-11-26 23:43:44.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 23:43:44 compute-0 nova_compute[189387]: 2025-11-26 23:43:44.134 189391 DEBUG oslo_concurrency.lockutils [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Acquiring lock "refresh_cache-2b8e8c61-3efb-436e-87b5-35ac9fe60d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:43:44 compute-0 nova_compute[189387]: 2025-11-26 23:43:44.135 189391 DEBUG oslo_concurrency.lockutils [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Acquired lock "refresh_cache-2b8e8c61-3efb-436e-87b5-35ac9fe60d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:43:44 compute-0 nova_compute[189387]: 2025-11-26 23:43:44.135 189391 DEBUG nova.network.neutron [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 26 23:43:44 compute-0 nova_compute[189387]: 2025-11-26 23:43:44.493 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:45 compute-0 nova_compute[189387]: 2025-11-26 23:43:45.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:43:45 compute-0 nova_compute[189387]: 2025-11-26 23:43:45.149 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:45 compute-0 nova_compute[189387]: 2025-11-26 23:43:45.150 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:45 compute-0 nova_compute[189387]: 2025-11-26 23:43:45.151 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:45 compute-0 nova_compute[189387]: 2025-11-26 23:43:45.152 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 23:43:45 compute-0 nova_compute[189387]: 2025-11-26 23:43:45.274 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:43:45 compute-0 podman[252379]: 2025-11-26 23:43:45.301454102 +0000 UTC m=+0.088711930 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 26 23:43:45 compute-0 podman[252378]: 2025-11-26 23:43:45.301868313 +0000 UTC m=+0.091728532 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 23:43:45 compute-0 podman[252380]: 2025-11-26 23:43:45.33060073 +0000 UTC m=+0.102824837 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 26 23:43:45 compute-0 podman[252376]: 2025-11-26 23:43:45.332623545 +0000 UTC m=+0.122065593 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, release=1214.1726694543, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 26 23:43:45 compute-0 podman[252381]: 2025-11-26 23:43:45.334560489 +0000 UTC m=+0.102654543 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-type=git, com.redhat.component=ubi9-minimal-container, architecture=x86_64)
Nov 26 23:43:45 compute-0 nova_compute[189387]: 2025-11-26 23:43:45.341 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:43:45 compute-0 nova_compute[189387]: 2025-11-26 23:43:45.342 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:43:45 compute-0 nova_compute[189387]: 2025-11-26 23:43:45.408 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:43:45 compute-0 nova_compute[189387]: 2025-11-26 23:43:45.429 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:43:45 compute-0 nova_compute[189387]: 2025-11-26 23:43:45.512 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:43:45 compute-0 nova_compute[189387]: 2025-11-26 23:43:45.513 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:43:45 compute-0 nova_compute[189387]: 2025-11-26 23:43:45.589 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/cf0578c2-8c80-4b7e-a866-a753553c6f9e/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:43:45 compute-0 nova_compute[189387]: 2025-11-26 23:43:45.598 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/280c0e48-ae70-40a7-96ca-137efae9ea75/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:43:45 compute-0 nova_compute[189387]: 2025-11-26 23:43:45.656 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/280c0e48-ae70-40a7-96ca-137efae9ea75/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:43:45 compute-0 nova_compute[189387]: 2025-11-26 23:43:45.657 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/280c0e48-ae70-40a7-96ca-137efae9ea75/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:43:45 compute-0 nova_compute[189387]: 2025-11-26 23:43:45.670 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:45 compute-0 nova_compute[189387]: 2025-11-26 23:43:45.719 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/280c0e48-ae70-40a7-96ca-137efae9ea75/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:43:46 compute-0 nova_compute[189387]: 2025-11-26 23:43:46.149 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 23:43:46 compute-0 nova_compute[189387]: 2025-11-26 23:43:46.150 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4844MB free_disk=72.28268814086914GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 23:43:46 compute-0 nova_compute[189387]: 2025-11-26 23:43:46.150 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:46 compute-0 nova_compute[189387]: 2025-11-26 23:43:46.151 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:46 compute-0 nova_compute[189387]: 2025-11-26 23:43:46.439 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance cf0578c2-8c80-4b7e-a866-a753553c6f9e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 23:43:46 compute-0 nova_compute[189387]: 2025-11-26 23:43:46.439 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 2b8e8c61-3efb-436e-87b5-35ac9fe60d69 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 23:43:46 compute-0 nova_compute[189387]: 2025-11-26 23:43:46.439 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 280c0e48-ae70-40a7-96ca-137efae9ea75 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 23:43:46 compute-0 nova_compute[189387]: 2025-11-26 23:43:46.440 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 23:43:46 compute-0 nova_compute[189387]: 2025-11-26 23:43:46.440 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 23:43:46 compute-0 nova_compute[189387]: 2025-11-26 23:43:46.455 189391 DEBUG nova.network.neutron [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Updating instance_info_cache with network_info: [{"id": "798557c8-33b8-48fa-ba80-092115a6af38", "address": "fa:16:3e:56:6c:8b", "network": {"id": "d6f23c8c-9266-4c49-bc94-0b9f021c07c2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-495565316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5cd62a5ad724aed83d939e3ba6d7fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798557c8-33", "ovs_interfaceid": "798557c8-33b8-48fa-ba80-092115a6af38", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:43:46 compute-0 nova_compute[189387]: 2025-11-26 23:43:46.527 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:43:46 compute-0 nova_compute[189387]: 2025-11-26 23:43:46.597 189391 DEBUG oslo_concurrency.lockutils [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Releasing lock "refresh_cache-2b8e8c61-3efb-436e-87b5-35ac9fe60d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:43:46 compute-0 nova_compute[189387]: 2025-11-26 23:43:46.598 189391 DEBUG nova.compute.manager [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:43:46 compute-0 nova_compute[189387]: 2025-11-26 23:43:46.609 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 23:43:46 compute-0 nova_compute[189387]: 2025-11-26 23:43:46.637 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 23:43:46 compute-0 nova_compute[189387]: 2025-11-26 23:43:46.637 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.486s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:46 compute-0 kernel: tap798557c8-33 (unregistering): left promiscuous mode
Nov 26 23:43:46 compute-0 NetworkManager[56227]: <info>  [1764200626.7388] device (tap798557c8-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 23:43:46 compute-0 nova_compute[189387]: 2025-11-26 23:43:46.749 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:46 compute-0 ovn_controller[97697]: 2025-11-26T23:43:46Z|00190|binding|INFO|Releasing lport 798557c8-33b8-48fa-ba80-092115a6af38 from this chassis (sb_readonly=0)
Nov 26 23:43:46 compute-0 ovn_controller[97697]: 2025-11-26T23:43:46Z|00191|binding|INFO|Setting lport 798557c8-33b8-48fa-ba80-092115a6af38 down in Southbound
Nov 26 23:43:46 compute-0 ovn_controller[97697]: 2025-11-26T23:43:46Z|00192|binding|INFO|Removing iface tap798557c8-33 ovn-installed in OVS
Nov 26 23:43:46 compute-0 nova_compute[189387]: 2025-11-26 23:43:46.760 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:46.765 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:6c:8b 10.100.0.6'], port_security=['fa:16:3e:56:6c:8b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2b8e8c61-3efb-436e-87b5-35ac9fe60d69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d6f23c8c-9266-4c49-bc94-0b9f021c07c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5cd62a5ad724aed83d939e3ba6d7fd7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4dbe9fb4-ed7b-48b4-a9c5-2b96bb554e51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b0599c7c-1f2c-4f1e-9216-c20a57ddeefa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=798557c8-33b8-48fa-ba80-092115a6af38) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:43:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:46.767 106595 INFO neutron.agent.ovn.metadata.agent [-] Port 798557c8-33b8-48fa-ba80-092115a6af38 in datapath d6f23c8c-9266-4c49-bc94-0b9f021c07c2 unbound from our chassis#033[00m
Nov 26 23:43:46 compute-0 nova_compute[189387]: 2025-11-26 23:43:46.770 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:46.771 106595 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d6f23c8c-9266-4c49-bc94-0b9f021c07c2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 26 23:43:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:46.772 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[fbeeb1bc-49e4-4985-80ee-5712cb145968]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:46.774 106595 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2 namespace which is not needed anymore#033[00m
Nov 26 23:43:46 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Nov 26 23:43:46 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 43.269s CPU time.
Nov 26 23:43:46 compute-0 systemd-machined[155674]: Machine qemu-11-instance-0000000b terminated.
Nov 26 23:43:46 compute-0 nova_compute[189387]: 2025-11-26 23:43:46.918 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:46 compute-0 nova_compute[189387]: 2025-11-26 23:43:46.926 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:46 compute-0 nova_compute[189387]: 2025-11-26 23:43:46.959 189391 INFO nova.virt.libvirt.driver [-] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Instance destroyed successfully.#033[00m
Nov 26 23:43:46 compute-0 nova_compute[189387]: 2025-11-26 23:43:46.960 189391 DEBUG nova.objects.instance [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lazy-loading 'resources' on Instance uuid 2b8e8c61-3efb-436e-87b5-35ac9fe60d69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:43:46 compute-0 neutron-haproxy-ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2[251571]: [NOTICE]   (251575) : haproxy version is 2.8.14-c23fe91
Nov 26 23:43:46 compute-0 neutron-haproxy-ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2[251571]: [NOTICE]   (251575) : path to executable is /usr/sbin/haproxy
Nov 26 23:43:46 compute-0 neutron-haproxy-ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2[251571]: [WARNING]  (251575) : Exiting Master process...
Nov 26 23:43:46 compute-0 neutron-haproxy-ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2[251571]: [ALERT]    (251575) : Current worker (251577) exited with code 143 (Terminated)
Nov 26 23:43:46 compute-0 neutron-haproxy-ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2[251571]: [WARNING]  (251575) : All workers exited. Exiting... (0)
Nov 26 23:43:46 compute-0 systemd[1]: libpod-11ed40a50bb0304de5c7d76f5d6732f29fb48c69f4635109ad27cf17c24536f7.scope: Deactivated successfully.
Nov 26 23:43:46 compute-0 nova_compute[189387]: 2025-11-26 23:43:46.998 189391 DEBUG nova.virt.libvirt.vif [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T23:42:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-317216903',display_name='tempest-ServerActionsTestJSON-server-317216903',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-317216903',id=11,image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBALDEq66uSnbDCnaPr9NW6WSucskLbrov7y7Lw8g6HLIB9MX0FvV85vzt5NxWgQHUlHzOWK54yVo80owjUx7VTSNbmpWR1rSDduj9dcSmqSox75C4uo2VseotetFpoaEEg==',key_name='tempest-keypair-1149430954',keypairs=<?>,launch_index=0,launched_at=2025-11-26T23:42:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b5cd62a5ad724aed83d939e3ba6d7fd7',ramdisk_id='',reservation_id='r-a5ssvw5x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1783347258',owner_user_name='tempest-ServerActionsTestJSON-1783347258-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T23:43:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3753fb1a520b4e088ce6979db5ae3773',uuid=2b8e8c61-3efb-436e-87b5-35ac9fe60d69,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "798557c8-33b8-48fa-ba80-092115a6af38", "address": "fa:16:3e:56:6c:8b", "network": {"id": "d6f23c8c-9266-4c49-bc94-0b9f021c07c2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-495565316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5cd62a5ad724aed83d939e3ba6d7fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798557c8-33", "ovs_interfaceid": "798557c8-33b8-48fa-ba80-092115a6af38", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 26 23:43:46 compute-0 nova_compute[189387]: 2025-11-26 23:43:46.998 189391 DEBUG nova.network.os_vif_util [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Converting VIF {"id": "798557c8-33b8-48fa-ba80-092115a6af38", "address": "fa:16:3e:56:6c:8b", "network": {"id": "d6f23c8c-9266-4c49-bc94-0b9f021c07c2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-495565316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5cd62a5ad724aed83d939e3ba6d7fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798557c8-33", "ovs_interfaceid": "798557c8-33b8-48fa-ba80-092115a6af38", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:46.999 189391 DEBUG nova.network.os_vif_util [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:56:6c:8b,bridge_name='br-int',has_traffic_filtering=True,id=798557c8-33b8-48fa-ba80-092115a6af38,network=Network(d6f23c8c-9266-4c49-bc94-0b9f021c07c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap798557c8-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.000 189391 DEBUG os_vif [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:6c:8b,bridge_name='br-int',has_traffic_filtering=True,id=798557c8-33b8-48fa-ba80-092115a6af38,network=Network(d6f23c8c-9266-4c49-bc94-0b9f021c07c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap798557c8-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.001 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.002 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap798557c8-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:43:47 compute-0 podman[252514]: 2025-11-26 23:43:47.003710989 +0000 UTC m=+0.088466403 container died 11ed40a50bb0304de5c7d76f5d6732f29fb48c69f4635109ad27cf17c24536f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.005 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.010 189391 INFO os_vif [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:6c:8b,bridge_name='br-int',has_traffic_filtering=True,id=798557c8-33b8-48fa-ba80-092115a6af38,network=Network(d6f23c8c-9266-4c49-bc94-0b9f021c07c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap798557c8-33')#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.019 189391 DEBUG nova.virt.libvirt.driver [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Start _get_guest_xml network_info=[{"id": "798557c8-33b8-48fa-ba80-092115a6af38", "address": "fa:16:3e:56:6c:8b", "network": {"id": "d6f23c8c-9266-4c49-bc94-0b9f021c07c2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-495565316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5cd62a5ad724aed83d939e3ba6d7fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798557c8-33", "ovs_interfaceid": "798557c8-33b8-48fa-ba80-092115a6af38", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=948c6d5b-0d46-4aec-8649-b6cdcb1a5694,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': '948c6d5b-0d46-4aec-8649-b6cdcb1a5694'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.028 189391 WARNING nova.virt.libvirt.driver [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.035 189391 DEBUG nova.virt.libvirt.host [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.036 189391 DEBUG nova.virt.libvirt.host [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.041 189391 DEBUG nova.virt.libvirt.host [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.042 189391 DEBUG nova.virt.libvirt.host [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.043 189391 DEBUG nova.virt.libvirt.driver [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.043 189391 DEBUG nova.virt.hardware [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T23:40:03Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a4234b2d-ed51-4e17-ad57-a8fb6154451b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=948c6d5b-0d46-4aec-8649-b6cdcb1a5694,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.045 189391 DEBUG nova.virt.hardware [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.045 189391 DEBUG nova.virt.hardware [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.045 189391 DEBUG nova.virt.hardware [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.046 189391 DEBUG nova.virt.hardware [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.046 189391 DEBUG nova.virt.hardware [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.047 189391 DEBUG nova.virt.hardware [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.047 189391 DEBUG nova.virt.hardware [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.047 189391 DEBUG nova.virt.hardware [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.048 189391 DEBUG nova.virt.hardware [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.048 189391 DEBUG nova.virt.hardware [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.049 189391 DEBUG nova.objects.instance [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2b8e8c61-3efb-436e-87b5-35ac9fe60d69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:43:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-11ed40a50bb0304de5c7d76f5d6732f29fb48c69f4635109ad27cf17c24536f7-userdata-shm.mount: Deactivated successfully.
Nov 26 23:43:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-61d44b59abc547f2c8918ffa04f8d496aa4c22c9ff91dc1a62123982e319499b-merged.mount: Deactivated successfully.
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.056 189391 DEBUG nova.compute.manager [req-e2d3c38e-148d-4716-83d2-f6e54561d905 req-0cc2ab72-efda-4840-87b4-7a5dcfbf26f2 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Received event network-vif-unplugged-798557c8-33b8-48fa-ba80-092115a6af38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.057 189391 DEBUG oslo_concurrency.lockutils [req-e2d3c38e-148d-4716-83d2-f6e54561d905 req-0cc2ab72-efda-4840-87b4-7a5dcfbf26f2 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.057 189391 DEBUG oslo_concurrency.lockutils [req-e2d3c38e-148d-4716-83d2-f6e54561d905 req-0cc2ab72-efda-4840-87b4-7a5dcfbf26f2 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.058 189391 DEBUG oslo_concurrency.lockutils [req-e2d3c38e-148d-4716-83d2-f6e54561d905 req-0cc2ab72-efda-4840-87b4-7a5dcfbf26f2 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.058 189391 DEBUG nova.compute.manager [req-e2d3c38e-148d-4716-83d2-f6e54561d905 req-0cc2ab72-efda-4840-87b4-7a5dcfbf26f2 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] No waiting events found dispatching network-vif-unplugged-798557c8-33b8-48fa-ba80-092115a6af38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.058 189391 WARNING nova.compute.manager [req-e2d3c38e-148d-4716-83d2-f6e54561d905 req-0cc2ab72-efda-4840-87b4-7a5dcfbf26f2 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Received unexpected event network-vif-unplugged-798557c8-33b8-48fa-ba80-092115a6af38 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Nov 26 23:43:47 compute-0 podman[252514]: 2025-11-26 23:43:47.064477812 +0000 UTC m=+0.149233196 container cleanup 11ed40a50bb0304de5c7d76f5d6732f29fb48c69f4635109ad27cf17c24536f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.066 189391 DEBUG oslo_concurrency.processutils [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.config --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:43:47 compute-0 systemd[1]: libpod-conmon-11ed40a50bb0304de5c7d76f5d6732f29fb48c69f4635109ad27cf17c24536f7.scope: Deactivated successfully.
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.144 189391 DEBUG oslo_concurrency.processutils [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.config --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.146 189391 DEBUG oslo_concurrency.lockutils [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Acquiring lock "/var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.146 189391 DEBUG oslo_concurrency.lockutils [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lock "/var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.147 189391 DEBUG oslo_concurrency.lockutils [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lock "/var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.149 189391 DEBUG nova.virt.libvirt.vif [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T23:42:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-317216903',display_name='tempest-ServerActionsTestJSON-server-317216903',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-317216903',id=11,image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBALDEq66uSnbDCnaPr9NW6WSucskLbrov7y7Lw8g6HLIB9MX0FvV85vzt5NxWgQHUlHzOWK54yVo80owjUx7VTSNbmpWR1rSDduj9dcSmqSox75C4uo2VseotetFpoaEEg==',key_name='tempest-keypair-1149430954',keypairs=<?>,launch_index=0,launched_at=2025-11-26T23:42:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b5cd62a5ad724aed83d939e3ba6d7fd7',ramdisk_id='',reservation_id='r-a5ssvw5x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1783347258',owner_user_name='tempest-ServerActionsTestJSON-1783347258-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T23:43:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3753fb1a520b4e088ce6979db5ae3773',uuid=2b8e8c61-3efb-436e-87b5-35ac9fe60d69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "798557c8-33b8-48fa-ba80-092115a6af38", "address": "fa:16:3e:56:6c:8b", "network": {"id": "d6f23c8c-9266-4c49-bc94-0b9f021c07c2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-495565316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5cd62a5ad724aed83d939e3ba6d7fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798557c8-33", "ovs_interfaceid": "798557c8-33b8-48fa-ba80-092115a6af38", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.149 189391 DEBUG nova.network.os_vif_util [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Converting VIF {"id": "798557c8-33b8-48fa-ba80-092115a6af38", "address": "fa:16:3e:56:6c:8b", "network": {"id": "d6f23c8c-9266-4c49-bc94-0b9f021c07c2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-495565316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5cd62a5ad724aed83d939e3ba6d7fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798557c8-33", "ovs_interfaceid": "798557c8-33b8-48fa-ba80-092115a6af38", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.151 189391 DEBUG nova.network.os_vif_util [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:56:6c:8b,bridge_name='br-int',has_traffic_filtering=True,id=798557c8-33b8-48fa-ba80-092115a6af38,network=Network(d6f23c8c-9266-4c49-bc94-0b9f021c07c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap798557c8-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.152 189391 DEBUG nova.objects.instance [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2b8e8c61-3efb-436e-87b5-35ac9fe60d69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:43:47 compute-0 podman[252559]: 2025-11-26 23:43:47.162169397 +0000 UTC m=+0.063327285 container remove 11ed40a50bb0304de5c7d76f5d6732f29fb48c69f4635109ad27cf17c24536f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.167 189391 DEBUG nova.virt.libvirt.driver [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] End _get_guest_xml xml=<domain type="kvm">
Nov 26 23:43:47 compute-0 nova_compute[189387]:  <uuid>2b8e8c61-3efb-436e-87b5-35ac9fe60d69</uuid>
Nov 26 23:43:47 compute-0 nova_compute[189387]:  <name>instance-0000000b</name>
Nov 26 23:43:47 compute-0 nova_compute[189387]:  <memory>131072</memory>
Nov 26 23:43:47 compute-0 nova_compute[189387]:  <vcpu>1</vcpu>
Nov 26 23:43:47 compute-0 nova_compute[189387]:  <metadata>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 23:43:47 compute-0 nova_compute[189387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:      <nova:name>tempest-ServerActionsTestJSON-server-317216903</nova:name>
Nov 26 23:43:47 compute-0 nova_compute[189387]:      <nova:creationTime>2025-11-26 23:43:47</nova:creationTime>
Nov 26 23:43:47 compute-0 nova_compute[189387]:      <nova:flavor name="m1.nano">
Nov 26 23:43:47 compute-0 nova_compute[189387]:        <nova:memory>128</nova:memory>
Nov 26 23:43:47 compute-0 nova_compute[189387]:        <nova:disk>1</nova:disk>
Nov 26 23:43:47 compute-0 nova_compute[189387]:        <nova:swap>0</nova:swap>
Nov 26 23:43:47 compute-0 nova_compute[189387]:        <nova:ephemeral>0</nova:ephemeral>
Nov 26 23:43:47 compute-0 nova_compute[189387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 23:43:47 compute-0 nova_compute[189387]:      </nova:flavor>
Nov 26 23:43:47 compute-0 nova_compute[189387]:      <nova:owner>
Nov 26 23:43:47 compute-0 nova_compute[189387]:        <nova:user uuid="3753fb1a520b4e088ce6979db5ae3773">tempest-ServerActionsTestJSON-1783347258-project-member</nova:user>
Nov 26 23:43:47 compute-0 nova_compute[189387]:        <nova:project uuid="b5cd62a5ad724aed83d939e3ba6d7fd7">tempest-ServerActionsTestJSON-1783347258</nova:project>
Nov 26 23:43:47 compute-0 nova_compute[189387]:      </nova:owner>
Nov 26 23:43:47 compute-0 nova_compute[189387]:      <nova:root type="image" uuid="948c6d5b-0d46-4aec-8649-b6cdcb1a5694"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:      <nova:ports>
Nov 26 23:43:47 compute-0 nova_compute[189387]:        <nova:port uuid="798557c8-33b8-48fa-ba80-092115a6af38">
Nov 26 23:43:47 compute-0 nova_compute[189387]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:        </nova:port>
Nov 26 23:43:47 compute-0 nova_compute[189387]:      </nova:ports>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    </nova:instance>
Nov 26 23:43:47 compute-0 nova_compute[189387]:  </metadata>
Nov 26 23:43:47 compute-0 nova_compute[189387]:  <sysinfo type="smbios">
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <system>
Nov 26 23:43:47 compute-0 nova_compute[189387]:      <entry name="manufacturer">RDO</entry>
Nov 26 23:43:47 compute-0 nova_compute[189387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 23:43:47 compute-0 nova_compute[189387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 23:43:47 compute-0 nova_compute[189387]:      <entry name="serial">2b8e8c61-3efb-436e-87b5-35ac9fe60d69</entry>
Nov 26 23:43:47 compute-0 nova_compute[189387]:      <entry name="uuid">2b8e8c61-3efb-436e-87b5-35ac9fe60d69</entry>
Nov 26 23:43:47 compute-0 nova_compute[189387]:      <entry name="family">Virtual Machine</entry>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    </system>
Nov 26 23:43:47 compute-0 nova_compute[189387]:  </sysinfo>
Nov 26 23:43:47 compute-0 nova_compute[189387]:  <os>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <boot dev="hd"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <smbios mode="sysinfo"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:  </os>
Nov 26 23:43:47 compute-0 nova_compute[189387]:  <features>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <acpi/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <apic/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <vmcoreinfo/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:  </features>
Nov 26 23:43:47 compute-0 nova_compute[189387]:  <clock offset="utc">
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <timer name="hpet" present="no"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:  </clock>
Nov 26 23:43:47 compute-0 nova_compute[189387]:  <cpu mode="host-model" match="exact">
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:  </cpu>
Nov 26 23:43:47 compute-0 nova_compute[189387]:  <devices>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <disk type="file" device="disk">
Nov 26 23:43:47 compute-0 nova_compute[189387]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:      <target dev="vda" bus="virtio"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <disk type="file" device="cdrom">
Nov 26 23:43:47 compute-0 nova_compute[189387]:      <driver name="qemu" type="raw" cache="none"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.config"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:      <target dev="sda" bus="sata"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <interface type="ethernet">
Nov 26 23:43:47 compute-0 nova_compute[189387]:      <mac address="fa:16:3e:56:6c:8b"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:      <mtu size="1442"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:      <target dev="tap798557c8-33"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    </interface>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <serial type="pty">
Nov 26 23:43:47 compute-0 nova_compute[189387]:      <log file="/var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/console.log" append="off"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    </serial>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <video>
Nov 26 23:43:47 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    </video>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <input type="tablet" bus="usb"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <input type="keyboard" bus="usb"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <rng model="virtio">
Nov 26 23:43:47 compute-0 nova_compute[189387]:      <backend model="random">/dev/urandom</backend>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    </rng>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <controller type="usb" index="0"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    <memballoon model="virtio">
Nov 26 23:43:47 compute-0 nova_compute[189387]:      <stats period="10"/>
Nov 26 23:43:47 compute-0 nova_compute[189387]:    </memballoon>
Nov 26 23:43:47 compute-0 nova_compute[189387]:  </devices>
Nov 26 23:43:47 compute-0 nova_compute[189387]: </domain>
Nov 26 23:43:47 compute-0 nova_compute[189387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.168 189391 DEBUG oslo_concurrency.processutils [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:43:47 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:47.176 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[c21e1b90-a25e-4000-a76c-b419d786595a]: (4, ('Wed Nov 26 11:43:46 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2 (11ed40a50bb0304de5c7d76f5d6732f29fb48c69f4635109ad27cf17c24536f7)\n11ed40a50bb0304de5c7d76f5d6732f29fb48c69f4635109ad27cf17c24536f7\nWed Nov 26 11:43:47 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2 (11ed40a50bb0304de5c7d76f5d6732f29fb48c69f4635109ad27cf17c24536f7)\n11ed40a50bb0304de5c7d76f5d6732f29fb48c69f4635109ad27cf17c24536f7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:47 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:47.178 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[490dc93b-5a6b-46f8-beba-f4a0b253a306]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:47 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:47.179 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6f23c8c-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:43:47 compute-0 kernel: tapd6f23c8c-90: left promiscuous mode
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.192 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.200 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:47 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:47.203 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a05a4d-0531-43b0-aee2-ced1ea5a60b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:47 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:47.217 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[12a9f2d0-dd71-4642-b4f0-ecdc604cdcd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:47 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:47.220 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[06dae7c9-1b6b-4e84-bfa3-073d2bdb7271]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:47 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:47.240 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[1cc8e325-b25c-4719-991c-a3fe9e7cfb1d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 523626, 'reachable_time': 25870, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252579, 'error': None, 'target': 'ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:47 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:47.251 106708 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 26 23:43:47 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:47.252 106708 DEBUG oslo.privsep.daemon [-] privsep: reply[e3006b72-d762-4195-9fbf-52dce677927d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:47 compute-0 systemd[1]: run-netns-ovnmeta\x2dd6f23c8c\x2d9266\x2d4c49\x2dbc94\x2d0b9f021c07c2.mount: Deactivated successfully.
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.258 189391 DEBUG oslo_concurrency.processutils [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.259 189391 DEBUG oslo_concurrency.processutils [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.335 189391 DEBUG oslo_concurrency.processutils [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.337 189391 DEBUG nova.objects.instance [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 2b8e8c61-3efb-436e-87b5-35ac9fe60d69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.357 189391 DEBUG oslo_concurrency.processutils [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.439 189391 DEBUG oslo_concurrency.processutils [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.441 189391 DEBUG nova.virt.disk.api [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Checking if we can resize image /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.441 189391 DEBUG oslo_concurrency.processutils [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.524 189391 DEBUG oslo_concurrency.processutils [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.525 189391 DEBUG nova.virt.disk.api [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Cannot resize image /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.525 189391 DEBUG nova.objects.instance [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lazy-loading 'migration_context' on Instance uuid 2b8e8c61-3efb-436e-87b5-35ac9fe60d69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.557 189391 DEBUG nova.virt.libvirt.vif [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T23:42:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-317216903',display_name='tempest-ServerActionsTestJSON-server-317216903',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-317216903',id=11,image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBALDEq66uSnbDCnaPr9NW6WSucskLbrov7y7Lw8g6HLIB9MX0FvV85vzt5NxWgQHUlHzOWK54yVo80owjUx7VTSNbmpWR1rSDduj9dcSmqSox75C4uo2VseotetFpoaEEg==',key_name='tempest-keypair-1149430954',keypairs=<?>,launch_index=0,launched_at=2025-11-26T23:42:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='b5cd62a5ad724aed83d939e3ba6d7fd7',ramdisk_id='',reservation_id='r-a5ssvw5x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1783347258',owner_user_name='tempest-ServerActionsTestJSON-1783347258-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T23:43:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3753fb1a520b4e088ce6979db5ae3773',uuid=2b8e8c61-3efb-436e-87b5-35ac9fe60d69,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "798557c8-33b8-48fa-ba80-092115a6af38", "address": "fa:16:3e:56:6c:8b", "network": {"id": "d6f23c8c-9266-4c49-bc94-0b9f021c07c2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-495565316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5cd62a5ad724aed83d939e3ba6d7fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798557c8-33", "ovs_interfaceid": "798557c8-33b8-48fa-ba80-092115a6af38", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.558 189391 DEBUG nova.network.os_vif_util [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Converting VIF {"id": "798557c8-33b8-48fa-ba80-092115a6af38", "address": "fa:16:3e:56:6c:8b", "network": {"id": "d6f23c8c-9266-4c49-bc94-0b9f021c07c2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-495565316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5cd62a5ad724aed83d939e3ba6d7fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798557c8-33", "ovs_interfaceid": "798557c8-33b8-48fa-ba80-092115a6af38", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.559 189391 DEBUG nova.network.os_vif_util [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:56:6c:8b,bridge_name='br-int',has_traffic_filtering=True,id=798557c8-33b8-48fa-ba80-092115a6af38,network=Network(d6f23c8c-9266-4c49-bc94-0b9f021c07c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap798557c8-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.559 189391 DEBUG os_vif [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:6c:8b,bridge_name='br-int',has_traffic_filtering=True,id=798557c8-33b8-48fa-ba80-092115a6af38,network=Network(d6f23c8c-9266-4c49-bc94-0b9f021c07c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap798557c8-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.560 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.561 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.562 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.564 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.564 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap798557c8-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.565 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap798557c8-33, col_values=(('external_ids', {'iface-id': '798557c8-33b8-48fa-ba80-092115a6af38', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:56:6c:8b', 'vm-uuid': '2b8e8c61-3efb-436e-87b5-35ac9fe60d69'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.567 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:47 compute-0 NetworkManager[56227]: <info>  [1764200627.5682] manager: (tap798557c8-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.570 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.577 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.578 189391 INFO os_vif [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:6c:8b,bridge_name='br-int',has_traffic_filtering=True,id=798557c8-33b8-48fa-ba80-092115a6af38,network=Network(d6f23c8c-9266-4c49-bc94-0b9f021c07c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap798557c8-33')#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.637 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:43:47 compute-0 kernel: tap798557c8-33: entered promiscuous mode
Nov 26 23:43:47 compute-0 systemd-udevd[252494]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 23:43:47 compute-0 ovn_controller[97697]: 2025-11-26T23:43:47Z|00193|binding|INFO|Claiming lport 798557c8-33b8-48fa-ba80-092115a6af38 for this chassis.
Nov 26 23:43:47 compute-0 ovn_controller[97697]: 2025-11-26T23:43:47Z|00194|binding|INFO|798557c8-33b8-48fa-ba80-092115a6af38: Claiming fa:16:3e:56:6c:8b 10.100.0.6
Nov 26 23:43:47 compute-0 NetworkManager[56227]: <info>  [1764200627.6609] manager: (tap798557c8-33): new Tun device (/org/freedesktop/NetworkManager/Devices/67)
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.665 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:47 compute-0 ovn_controller[97697]: 2025-11-26T23:43:47Z|00195|binding|INFO|Setting lport 798557c8-33b8-48fa-ba80-092115a6af38 ovn-installed in OVS
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.675 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.678 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:47 compute-0 NetworkManager[56227]: <info>  [1764200627.6833] device (tap798557c8-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 23:43:47 compute-0 NetworkManager[56227]: <info>  [1764200627.6884] device (tap798557c8-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 23:43:47 compute-0 ovn_controller[97697]: 2025-11-26T23:43:47Z|00196|binding|INFO|Setting lport 798557c8-33b8-48fa-ba80-092115a6af38 up in Southbound
Nov 26 23:43:47 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:47.700 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:6c:8b 10.100.0.6'], port_security=['fa:16:3e:56:6c:8b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2b8e8c61-3efb-436e-87b5-35ac9fe60d69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d6f23c8c-9266-4c49-bc94-0b9f021c07c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5cd62a5ad724aed83d939e3ba6d7fd7', 'neutron:revision_number': '5', 'neutron:security_group_ids': '4dbe9fb4-ed7b-48b4-a9c5-2b96bb554e51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b0599c7c-1f2c-4f1e-9216-c20a57ddeefa, chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=798557c8-33b8-48fa-ba80-092115a6af38) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:43:47 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:47.701 106595 INFO neutron.agent.ovn.metadata.agent [-] Port 798557c8-33b8-48fa-ba80-092115a6af38 in datapath d6f23c8c-9266-4c49-bc94-0b9f021c07c2 bound to our chassis#033[00m
Nov 26 23:43:47 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:47.703 106595 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d6f23c8c-9266-4c49-bc94-0b9f021c07c2#033[00m
Nov 26 23:43:47 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:47.714 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[7599bbbc-fd47-4108-b62d-0ddcabc4b005]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:47 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:47.715 106595 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd6f23c8c-91 in ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 26 23:43:47 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:47.716 239757 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd6f23c8c-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 26 23:43:47 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:47.716 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[1a6e2f56-b5e9-4ca4-872c-612f4b9d5839]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:47 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:47.717 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[bf75498d-8526-4e3b-9bc8-af07e881bbae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:47 compute-0 systemd-machined[155674]: New machine qemu-14-instance-0000000b.
Nov 26 23:43:47 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000b.
Nov 26 23:43:47 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:47.732 106708 DEBUG oslo.privsep.daemon [-] privsep: reply[cdca12e5-f2cb-4440-94f7-1f5881bd0c3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:47 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:47.755 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[5e864263-a7a5-41ed-92d3-9a6e00fb1975]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:47 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:47.785 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c55c2f-1623-4195-a9e5-9a35bac3799e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:47 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:47.792 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[d176c9bf-a286-4500-a627-ca0438591c38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:47 compute-0 NetworkManager[56227]: <info>  [1764200627.7935] manager: (tapd6f23c8c-90): new Veth device (/org/freedesktop/NetworkManager/Devices/68)
Nov 26 23:43:47 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:47.827 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[1abc86cb-4f74-4c32-b86c-d54fc892e45d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:47 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:47.831 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[48c7cc02-8e38-4686-b6b7-b5bcec537428]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:47 compute-0 NetworkManager[56227]: <info>  [1764200627.8527] device (tapd6f23c8c-90): carrier: link connected
Nov 26 23:43:47 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:47.857 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[61b94691-389c-43e0-b9dc-d255516b9e50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:47 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:47.876 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[4affb333-0001-49c4-abe0-d637d9c747f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd6f23c8c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:31:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 531050, 'reachable_time': 26558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252659, 'error': None, 'target': 'ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:47 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:47.894 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[20fe4e8c-961b-4b65-8a48-66d9afc6d58f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe92:313b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 531050, 'tstamp': 531050}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252660, 'error': None, 'target': 'ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:47 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:47.913 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[cdef217f-4559-4416-b95c-551df2b08d97]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd6f23c8c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:31:3b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 176, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 531050, 'reachable_time': 26558, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252661, 'error': None, 'target': 'ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:47 compute-0 nova_compute[189387]: 2025-11-26 23:43:47.924 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:47 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:47.954 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[596c925f-9410-4bfa-815b-7a247b8f65a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:48.022 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[2203ba24-5a8f-4da0-b24b-c1ebc01c2e4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:48.024 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6f23c8c-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:48.024 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:48.025 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6f23c8c-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:43:48 compute-0 NetworkManager[56227]: <info>  [1764200628.0273] manager: (tapd6f23c8c-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Nov 26 23:43:48 compute-0 kernel: tapd6f23c8c-90: entered promiscuous mode
Nov 26 23:43:48 compute-0 nova_compute[189387]: 2025-11-26 23:43:48.026 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:48.045 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd6f23c8c-90, col_values=(('external_ids', {'iface-id': '7b0be577-69f9-4df8-992b-e7c104217e56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
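The three ovsdbapp transactions above re-home the metadata tap: DelPortCommand drops tapd6f23c8c-90 from br-ex if it is there (a no-op here, hence "Transaction caused no change"), AddPortCommand attaches it to br-int, and DbSetCommand writes external_ids:iface-id so ovn-controller can bind it to the logical port. A rough equivalent with ovsdbapp, batched into a single transaction for brevity (the agent issues them as separate single-command transactions, as the txn n=1 entries show); the OVSDB socket path and timeout are assumptions:

    # Sketch: replay of the DelPort/AddPort/DbSet commands logged above.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumed local ovsdb-server endpoint; adjust to the deployment.
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapd6f23c8c-90', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapd6f23c8c-90', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapd6f23c8c-90',
            ('external_ids',
             {'iface-id': '7b0be577-69f9-4df8-992b-e7c104217e56'})))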
Nov 26 23:43:48 compute-0 ovn_controller[97697]: 2025-11-26T23:43:48Z|00197|binding|INFO|Releasing lport 7b0be577-69f9-4df8-992b-e7c104217e56 from this chassis (sb_readonly=0)
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:48.076 106595 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d6f23c8c-9266-4c49-bc94-0b9f021c07c2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d6f23c8c-9266-4c49-bc94-0b9f021c07c2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 26 23:43:48 compute-0 nova_compute[189387]: 2025-11-26 23:43:48.077 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:48.078 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[9e586eca-8aad-4145-88b7-11db65b028b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:48.079 106595 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]: global
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]:    log         /dev/log local0 debug
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]:    log-tag     haproxy-metadata-proxy-d6f23c8c-9266-4c49-bc94-0b9f021c07c2
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]:    user        root
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]:    group       root
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]:    maxconn     1024
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]:    pidfile     /var/lib/neutron/external/pids/d6f23c8c-9266-4c49-bc94-0b9f021c07c2.pid.haproxy
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]:    daemon
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]: defaults
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]:    log global
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]:    mode http
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]:    option httplog
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]:    option dontlognull
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]:    option http-server-close
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]:    option forwardfor
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]:    retries                 3
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]:    timeout http-request    30s
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]:    timeout connect         30s
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]:    timeout client          32s
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]:    timeout server          32s
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]:    timeout http-keep-alive 30s
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]: listen listener
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]:    bind 169.254.169.254:80
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]:    server metadata /var/lib/neutron/metadata_proxy
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]:    http-request add-header X-OVN-Network-ID d6f23c8c-9266-4c49-bc94-0b9f021c07c2
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 26 23:43:48 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:48.079 106595 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2', 'env', 'PROCESS_TAG=haproxy-d6f23c8c-9266-4c49-bc94-0b9f021c07c2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d6f23c8c-9266-4c49-bc94-0b9f021c07c2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
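The rootwrap invocation above boils down to launching haproxy inside the ovnmeta namespace against the config just rendered; PROCESS_TAG only labels the process for later lookup. A stand-alone equivalent for illustration (requires root and would race with the agent's own spawn; every path and value is copied from the log entry):

    # Sketch: the effective command behind the sudo/neutron-rootwrap wrapper above.
    import subprocess

    netns = 'ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2'
    cfg = ('/var/lib/neutron/ovn-metadata-proxy/'
           'd6f23c8c-9266-4c49-bc94-0b9f021c07c2.conf')
    subprocess.run(
        ['ip', 'netns', 'exec', netns,
         'env', 'PROCESS_TAG=haproxy-d6f23c8c-9266-4c49-bc94-0b9f021c07c2',
         'haproxy', '-f', cfg],
        check=True)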
Nov 26 23:43:48 compute-0 nova_compute[189387]: 2025-11-26 23:43:48.120 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:43:48 compute-0 nova_compute[189387]: 2025-11-26 23:43:48.444 189391 DEBUG nova.virt.libvirt.host [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Removed pending event for 2b8e8c61-3efb-436e-87b5-35ac9fe60d69 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Nov 26 23:43:48 compute-0 nova_compute[189387]: 2025-11-26 23:43:48.444 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200628.4436142, 2b8e8c61-3efb-436e-87b5-35ac9fe60d69 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:43:48 compute-0 nova_compute[189387]: 2025-11-26 23:43:48.445 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] VM Resumed (Lifecycle Event)#033[00m
Nov 26 23:43:48 compute-0 nova_compute[189387]: 2025-11-26 23:43:48.446 189391 DEBUG nova.compute.manager [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 26 23:43:48 compute-0 nova_compute[189387]: 2025-11-26 23:43:48.452 189391 INFO nova.virt.libvirt.driver [-] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Instance rebooted successfully.#033[00m
Nov 26 23:43:48 compute-0 nova_compute[189387]: 2025-11-26 23:43:48.452 189391 DEBUG nova.compute.manager [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:43:48 compute-0 nova_compute[189387]: 2025-11-26 23:43:48.466 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:43:48 compute-0 nova_compute[189387]: 2025-11-26 23:43:48.471 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 23:43:48 compute-0 nova_compute[189387]: 2025-11-26 23:43:48.503 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Nov 26 23:43:48 compute-0 nova_compute[189387]: 2025-11-26 23:43:48.504 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200628.4483936, 2b8e8c61-3efb-436e-87b5-35ac9fe60d69 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:43:48 compute-0 nova_compute[189387]: 2025-11-26 23:43:48.504 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] VM Started (Lifecycle Event)#033[00m
Nov 26 23:43:48 compute-0 nova_compute[189387]: 2025-11-26 23:43:48.519 189391 DEBUG oslo_concurrency.lockutils [None req-779bbf06-5d28-4674-a328-763ccc150424 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.412s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:48 compute-0 nova_compute[189387]: 2025-11-26 23:43:48.526 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:43:48 compute-0 nova_compute[189387]: 2025-11-26 23:43:48.533 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 23:43:48 compute-0 podman[252700]: 2025-11-26 23:43:48.559134145 +0000 UTC m=+0.069816803 container create cecdf5ecfcbab2050bdd6a6494c766021b80032120305b4e9eb794cba34a9aa8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 23:43:48 compute-0 systemd[1]: Started libpod-conmon-cecdf5ecfcbab2050bdd6a6494c766021b80032120305b4e9eb794cba34a9aa8.scope.
Nov 26 23:43:48 compute-0 podman[252700]: 2025-11-26 23:43:48.525144664 +0000 UTC m=+0.035827372 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 26 23:43:48 compute-0 systemd[1]: Started libcrun container.
Nov 26 23:43:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5863185391fa75e8ff9e3d95087408279c165c8ade7ab44168c6c37851dd6ab/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 26 23:43:48 compute-0 podman[252700]: 2025-11-26 23:43:48.663109512 +0000 UTC m=+0.173792170 container init cecdf5ecfcbab2050bdd6a6494c766021b80032120305b4e9eb794cba34a9aa8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 26 23:43:48 compute-0 podman[252700]: 2025-11-26 23:43:48.673188737 +0000 UTC m=+0.183871395 container start cecdf5ecfcbab2050bdd6a6494c766021b80032120305b4e9eb794cba34a9aa8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 26 23:43:48 compute-0 ovn_controller[97697]: 2025-11-26T23:43:48Z|00198|binding|INFO|Releasing lport 7b0be577-69f9-4df8-992b-e7c104217e56 from this chassis (sb_readonly=0)
Nov 26 23:43:48 compute-0 ovn_controller[97697]: 2025-11-26T23:43:48Z|00199|binding|INFO|Releasing lport 9bcac48d-895a-4cd4-ba63-78258e9255b2 from this chassis (sb_readonly=0)
Nov 26 23:43:48 compute-0 neutron-haproxy-ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2[252715]: [NOTICE]   (252719) : New worker (252721) forked
Nov 26 23:43:48 compute-0 neutron-haproxy-ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2[252715]: [NOTICE]   (252719) : Loading success.
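With the worker forked and "Loading success" logged, the proxy defined by the config above is serving: inside the namespace it listens on 169.254.169.254:80 and forwards to /var/lib/neutron/metadata_proxy (haproxy treats a server address beginning with "/" as a UNIX socket), adding the X-OVN-Network-ID header on the way. A quick liveness probe, assuming root and that curl is available on the host:

    # Sketch: probe the metadata proxy inside its namespace (requires root).
    import subprocess

    subprocess.run(
        ['ip', 'netns', 'exec', 'ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2',
         'curl', '-s', '-o', '/dev/null', '-w', '%{http_code}\n',
         'http://169.254.169.254/'],
        check=False)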
Nov 26 23:43:48 compute-0 nova_compute[189387]: 2025-11-26 23:43:48.728 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:49 compute-0 nova_compute[189387]: 2025-11-26 23:43:49.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:43:49 compute-0 nova_compute[189387]: 2025-11-26 23:43:49.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:43:49 compute-0 nova_compute[189387]: 2025-11-26 23:43:49.181 189391 DEBUG nova.compute.manager [req-f2098288-ffba-48d5-a733-74989c4e84c5 req-f004b9ac-6f4e-4cf9-921c-704691c12d65 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Received event network-vif-plugged-798557c8-33b8-48fa-ba80-092115a6af38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:43:49 compute-0 nova_compute[189387]: 2025-11-26 23:43:49.181 189391 DEBUG oslo_concurrency.lockutils [req-f2098288-ffba-48d5-a733-74989c4e84c5 req-f004b9ac-6f4e-4cf9-921c-704691c12d65 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:49 compute-0 nova_compute[189387]: 2025-11-26 23:43:49.182 189391 DEBUG oslo_concurrency.lockutils [req-f2098288-ffba-48d5-a733-74989c4e84c5 req-f004b9ac-6f4e-4cf9-921c-704691c12d65 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:49 compute-0 nova_compute[189387]: 2025-11-26 23:43:49.182 189391 DEBUG oslo_concurrency.lockutils [req-f2098288-ffba-48d5-a733-74989c4e84c5 req-f004b9ac-6f4e-4cf9-921c-704691c12d65 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:49 compute-0 nova_compute[189387]: 2025-11-26 23:43:49.182 189391 DEBUG nova.compute.manager [req-f2098288-ffba-48d5-a733-74989c4e84c5 req-f004b9ac-6f4e-4cf9-921c-704691c12d65 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] No waiting events found dispatching network-vif-plugged-798557c8-33b8-48fa-ba80-092115a6af38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:43:49 compute-0 nova_compute[189387]: 2025-11-26 23:43:49.183 189391 WARNING nova.compute.manager [req-f2098288-ffba-48d5-a733-74989c4e84c5 req-f004b9ac-6f4e-4cf9-921c-704691c12d65 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Received unexpected event network-vif-plugged-798557c8-33b8-48fa-ba80-092115a6af38 for instance with vm_state active and task_state None.#033[00m
Nov 26 23:43:49 compute-0 nova_compute[189387]: 2025-11-26 23:43:49.183 189391 DEBUG nova.compute.manager [req-f2098288-ffba-48d5-a733-74989c4e84c5 req-f004b9ac-6f4e-4cf9-921c-704691c12d65 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Received event network-vif-plugged-798557c8-33b8-48fa-ba80-092115a6af38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:43:49 compute-0 nova_compute[189387]: 2025-11-26 23:43:49.183 189391 DEBUG oslo_concurrency.lockutils [req-f2098288-ffba-48d5-a733-74989c4e84c5 req-f004b9ac-6f4e-4cf9-921c-704691c12d65 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:49 compute-0 nova_compute[189387]: 2025-11-26 23:43:49.183 189391 DEBUG oslo_concurrency.lockutils [req-f2098288-ffba-48d5-a733-74989c4e84c5 req-f004b9ac-6f4e-4cf9-921c-704691c12d65 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:49 compute-0 nova_compute[189387]: 2025-11-26 23:43:49.184 189391 DEBUG oslo_concurrency.lockutils [req-f2098288-ffba-48d5-a733-74989c4e84c5 req-f004b9ac-6f4e-4cf9-921c-704691c12d65 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:49 compute-0 nova_compute[189387]: 2025-11-26 23:43:49.184 189391 DEBUG nova.compute.manager [req-f2098288-ffba-48d5-a733-74989c4e84c5 req-f004b9ac-6f4e-4cf9-921c-704691c12d65 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] No waiting events found dispatching network-vif-plugged-798557c8-33b8-48fa-ba80-092115a6af38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:43:49 compute-0 nova_compute[189387]: 2025-11-26 23:43:49.184 189391 WARNING nova.compute.manager [req-f2098288-ffba-48d5-a733-74989c4e84c5 req-f004b9ac-6f4e-4cf9-921c-704691c12d65 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Received unexpected event network-vif-plugged-798557c8-33b8-48fa-ba80-092115a6af38 for instance with vm_state active and task_state None.#033[00m
Nov 26 23:43:49 compute-0 nova_compute[189387]: 2025-11-26 23:43:49.184 189391 DEBUG nova.compute.manager [req-f2098288-ffba-48d5-a733-74989c4e84c5 req-f004b9ac-6f4e-4cf9-921c-704691c12d65 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Received event network-vif-plugged-798557c8-33b8-48fa-ba80-092115a6af38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:43:49 compute-0 nova_compute[189387]: 2025-11-26 23:43:49.185 189391 DEBUG oslo_concurrency.lockutils [req-f2098288-ffba-48d5-a733-74989c4e84c5 req-f004b9ac-6f4e-4cf9-921c-704691c12d65 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:49 compute-0 nova_compute[189387]: 2025-11-26 23:43:49.185 189391 DEBUG oslo_concurrency.lockutils [req-f2098288-ffba-48d5-a733-74989c4e84c5 req-f004b9ac-6f4e-4cf9-921c-704691c12d65 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:49 compute-0 nova_compute[189387]: 2025-11-26 23:43:49.185 189391 DEBUG oslo_concurrency.lockutils [req-f2098288-ffba-48d5-a733-74989c4e84c5 req-f004b9ac-6f4e-4cf9-921c-704691c12d65 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:49 compute-0 nova_compute[189387]: 2025-11-26 23:43:49.186 189391 DEBUG nova.compute.manager [req-f2098288-ffba-48d5-a733-74989c4e84c5 req-f004b9ac-6f4e-4cf9-921c-704691c12d65 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] No waiting events found dispatching network-vif-plugged-798557c8-33b8-48fa-ba80-092115a6af38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:43:49 compute-0 nova_compute[189387]: 2025-11-26 23:43:49.186 189391 WARNING nova.compute.manager [req-f2098288-ffba-48d5-a733-74989c4e84c5 req-f004b9ac-6f4e-4cf9-921c-704691c12d65 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Received unexpected event network-vif-plugged-798557c8-33b8-48fa-ba80-092115a6af38 for instance with vm_state active and task_state None.#033[00m
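The three identical Received event / No waiting events found / Received unexpected event cycles above are a pop-or-warn dispatch: Neutron's network-vif-plugged notifications for the hard reboot arrive after the reboot has finished (task_state is already None), so no waiter is registered and each event is logged as unexpected instead of waking anyone. An illustrative simplification of that pattern, not nova's actual implementation; the uuid and event name mirror the log:

    # Sketch: pop-or-warn event dispatch, as seen in the lock/pop/warning lines.
    import threading
    from collections import defaultdict

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = defaultdict(dict)  # instance uuid -> name -> Event

        def prepare(self, uuid, name):
            ev = threading.Event()
            with self._lock:
                self._events[uuid][name] = ev
            return ev  # a waiter blocks on ev.wait() with a timeout

        def pop(self, uuid, name):
            with self._lock:  # the acquire/release pairs in the log
                return self._events[uuid].pop(name, None)

    registry = InstanceEvents()
    ev = registry.pop('2b8e8c61-3efb-436e-87b5-35ac9fe60d69',
                      'network-vif-plugged-798557c8-33b8-48fa-ba80-092115a6af38')
    if ev is None:
        print('No waiting events found; event is unexpected')  # the WARNING path
    else:
        ev.set()  # wakes the thread in wait_for_instance_event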
Nov 26 23:43:49 compute-0 ovn_controller[97697]: 2025-11-26T23:43:49Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:35:df:c3 10.100.0.7
Nov 26 23:43:49 compute-0 ovn_controller[97697]: 2025-11-26T23:43:49Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:35:df:c3 10.100.0.7
Nov 26 23:43:50 compute-0 nova_compute[189387]: 2025-11-26 23:43:50.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:43:50 compute-0 nova_compute[189387]: 2025-11-26 23:43:50.669 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:52 compute-0 nova_compute[189387]: 2025-11-26 23:43:52.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:43:52 compute-0 nova_compute[189387]: 2025-11-26 23:43:52.569 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:54 compute-0 nova_compute[189387]: 2025-11-26 23:43:54.464 189391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764200619.4631457, 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:43:54 compute-0 nova_compute[189387]: 2025-11-26 23:43:54.465 189391 INFO nova.compute.manager [-] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] VM Stopped (Lifecycle Event)#033[00m
Nov 26 23:43:54 compute-0 nova_compute[189387]: 2025-11-26 23:43:54.488 189391 DEBUG nova.compute.manager [None req-d831553c-8cf9-4ca3-990d-49a216984fd4 - - - - - -] [instance: 8c6c2d42-56ca-46f9-a12a-54c84adf5dbd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:43:54 compute-0 podman[252732]: 2025-11-26 23:43:54.839925438 +0000 UTC m=+0.127227994 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
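The podman record above is the periodic health check of the multipathd container (health_status=healthy, failing streak 0; per config_data the test command is /openstack/healthcheck). The same status can be read back out of band; a sketch driven from Python to stay in one language, noting that the Go-template field path may vary across podman versions:

    # Sketch: read the multipathd container's current health state.
    import subprocess

    out = subprocess.run(
        ['podman', 'inspect', '--format',
         '{{.State.Health.Status}}', 'multipathd'],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # expected: healthy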
Nov 26 23:43:55 compute-0 nova_compute[189387]: 2025-11-26 23:43:55.512 189391 INFO nova.compute.manager [None req-22c223f1-1ff4-43ce-b794-2744c63dcb15 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Get console output#033[00m
Nov 26 23:43:55 compute-0 nova_compute[189387]: 2025-11-26 23:43:55.525 239672 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 26 23:43:55 compute-0 nova_compute[189387]: 2025-11-26 23:43:55.672 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:55 compute-0 nova_compute[189387]: 2025-11-26 23:43:55.831 189391 DEBUG oslo_concurrency.lockutils [None req-eb6a0f4e-eb96-4600-bd1a-d59c7d1ae93a 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Acquiring lock "280c0e48-ae70-40a7-96ca-137efae9ea75" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:55 compute-0 nova_compute[189387]: 2025-11-26 23:43:55.833 189391 DEBUG oslo_concurrency.lockutils [None req-eb6a0f4e-eb96-4600-bd1a-d59c7d1ae93a 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "280c0e48-ae70-40a7-96ca-137efae9ea75" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:55 compute-0 nova_compute[189387]: 2025-11-26 23:43:55.834 189391 DEBUG oslo_concurrency.lockutils [None req-eb6a0f4e-eb96-4600-bd1a-d59c7d1ae93a 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Acquiring lock "280c0e48-ae70-40a7-96ca-137efae9ea75-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:55 compute-0 nova_compute[189387]: 2025-11-26 23:43:55.835 189391 DEBUG oslo_concurrency.lockutils [None req-eb6a0f4e-eb96-4600-bd1a-d59c7d1ae93a 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "280c0e48-ae70-40a7-96ca-137efae9ea75-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:55 compute-0 nova_compute[189387]: 2025-11-26 23:43:55.836 189391 DEBUG oslo_concurrency.lockutils [None req-eb6a0f4e-eb96-4600-bd1a-d59c7d1ae93a 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "280c0e48-ae70-40a7-96ca-137efae9ea75-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:55 compute-0 nova_compute[189387]: 2025-11-26 23:43:55.838 189391 INFO nova.compute.manager [None req-eb6a0f4e-eb96-4600-bd1a-d59c7d1ae93a 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Terminating instance#033[00m
Nov 26 23:43:55 compute-0 nova_compute[189387]: 2025-11-26 23:43:55.840 189391 DEBUG nova.compute.manager [None req-eb6a0f4e-eb96-4600-bd1a-d59c7d1ae93a 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 26 23:43:55 compute-0 kernel: tap933bd457-0c (unregistering): left promiscuous mode
Nov 26 23:43:55 compute-0 NetworkManager[56227]: <info>  [1764200635.8721] device (tap933bd457-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 23:43:55 compute-0 nova_compute[189387]: 2025-11-26 23:43:55.879 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:55 compute-0 ovn_controller[97697]: 2025-11-26T23:43:55Z|00200|binding|INFO|Releasing lport 933bd457-0cc9-4849-a69f-0f02814a844a from this chassis (sb_readonly=0)
Nov 26 23:43:55 compute-0 ovn_controller[97697]: 2025-11-26T23:43:55Z|00201|binding|INFO|Setting lport 933bd457-0cc9-4849-a69f-0f02814a844a down in Southbound
Nov 26 23:43:55 compute-0 ovn_controller[97697]: 2025-11-26T23:43:55Z|00202|binding|INFO|Removing iface tap933bd457-0c ovn-installed in OVS
Nov 26 23:43:55 compute-0 nova_compute[189387]: 2025-11-26 23:43:55.887 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:55.898 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:df:c3 10.100.0.7'], port_security=['fa:16:3e:35:df:c3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '280c0e48-ae70-40a7-96ca-137efae9ea75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-865b8b48-3753-4a05-b614-ccecb1e87781', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '41a6ffab20ee4735b3f190a1e087aed2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f82289b5-273e-4d7e-9ac6-24bd2e2ecd7d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.238'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5348c531-5047-446f-b828-c2a0486b273b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=933bd457-0cc9-4849-a69f-0f02814a844a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
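The matched UPDATE above comes from ovsdbapp's row-event machinery: the agent watches the southbound Port_Binding table, and the row for logical_port 933bd457-0cc9-4849-a69f-0f02814a844a just flipped from up=[True] with a chassis to up=[False] with chassis=[], which triggers the "unbound from our chassis" handling below. A minimal skeleton of such an event class with ovsdbapp (illustrative subclass; neutron's real handler carries more logic):

    # Sketch: an ovsdbapp RowEvent like the one matched above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None,
            # exactly as printed in the matched-event log line.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Called with the new row and the old values of changed columns.
            if getattr(old, 'chassis', None) and not row.chassis:
                print('Port %s unbound from our chassis' % row.logical_port)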
Nov 26 23:43:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:55.901 106595 INFO neutron.agent.ovn.metadata.agent [-] Port 933bd457-0cc9-4849-a69f-0f02814a844a in datapath 865b8b48-3753-4a05-b614-ccecb1e87781 unbound from our chassis#033[00m
Nov 26 23:43:55 compute-0 nova_compute[189387]: 2025-11-26 23:43:55.912 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:55.913 106595 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 865b8b48-3753-4a05-b614-ccecb1e87781#033[00m
Nov 26 23:43:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:55.937 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[fea3d3d2-b5dd-4d73-a668-b510a63cd708]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:55 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Nov 26 23:43:55 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 37.132s CPU time.
Nov 26 23:43:55 compute-0 systemd-machined[155674]: Machine qemu-13-instance-0000000d terminated.
Nov 26 23:43:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:55.969 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[d622cd90-02b8-49d4-8fb0-53309183edde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:55.972 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[1c30439a-823a-4865-97b9-4145820a296f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:56 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:56.004 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[f1bec149-49be-47f1-831e-b598401c236f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:56 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:56.027 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[0848ffcc-5d55-4250-88f0-4f0e86ca70db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap865b8b48-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:37:94:36'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520908, 'reachable_time': 41066, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252765, 'error': None, 'target': 'ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:43:56 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:56.047 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[605cee63-0ef1-48d6-a2a1-d6f4bc6504f3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap865b8b48-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520919, 'tstamp': 520919}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252766, 'error': None, 'target': 'ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap865b8b48-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 520922, 'tstamp': 520922}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252766, 'error': None, 'target': 'ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
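The paired RTM_NEWADDR messages above show the metadata interface tap865b8b48-31 inside the new namespace carrying both 169.254.169.254/32 (the address the haproxy listener binds) and 10.100.0.2/28 (the network's metadata port address). The same dump with pyroute2, under the same root/namespace assumptions as before:

    # Sketch: list IPv4 addresses inside the ovnmeta namespace, mirroring the
    # RTM_NEWADDR privsep reply above (requires root).
    import socket
    from pyroute2 import NetNS

    with NetNS('ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781') as ns:
        for addr in ns.get_addr(family=socket.AF_INET):
            print(addr.get_attr('IFA_LABEL'),  # tap865b8b48-31
                  '%s/%s' % (addr.get_attr('IFA_ADDRESS'), addr['prefixlen']))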
Nov 26 23:43:56 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:56.051 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap865b8b48-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:43:56 compute-0 nova_compute[189387]: 2025-11-26 23:43:56.053 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:56 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:56.062 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap865b8b48-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:43:56 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:56.063 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:43:56 compute-0 nova_compute[189387]: 2025-11-26 23:43:56.063 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:56 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:56.064 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap865b8b48-30, col_values=(('external_ids', {'iface-id': '9bcac48d-895a-4cd4-ba63-78258e9255b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:43:56 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:43:56.064 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:43:56 compute-0 nova_compute[189387]: 2025-11-26 23:43:56.115 189391 INFO nova.virt.libvirt.driver [-] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Instance destroyed successfully.#033[00m
Nov 26 23:43:56 compute-0 nova_compute[189387]: 2025-11-26 23:43:56.116 189391 DEBUG nova.objects.instance [None req-eb6a0f4e-eb96-4600-bd1a-d59c7d1ae93a 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lazy-loading 'resources' on Instance uuid 280c0e48-ae70-40a7-96ca-137efae9ea75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:43:56 compute-0 nova_compute[189387]: 2025-11-26 23:43:56.146 189391 DEBUG nova.virt.libvirt.vif [None req-eb6a0f4e-eb96-4600-bd1a-d59c7d1ae93a 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T23:43:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1281481769',display_name='tempest-TestNetworkBasicOps-server-1281481769',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1281481769',id=13,image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFKin8/XaNI4u/AYbm+AlTkBab4sekoAfGEYZ1xPAIyDCewt1Z3fL7r22TdbnxwwFN3eMieH8Zlh1I4XbYkvGH8E1RbG0Ttc70Iez5mBk4a8ExcMnExYK+II1qhMImhEbA==',key_name='tempest-TestNetworkBasicOps-1027657392',keypairs=<?>,launch_index=0,launched_at=2025-11-26T23:43:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='41a6ffab20ee4735b3f190a1e087aed2',ramdisk_id='',reservation_id='r-mb0zqbim',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1869958511',owner_user_name='tempest-TestNetworkBasicOps-1869958511-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T23:43:14Z,user_data=None,user_id='6a001028c92e48d0b5914bef72937111',uuid=280c0e48-ae70-40a7-96ca-137efae9ea75,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "933bd457-0cc9-4849-a69f-0f02814a844a", "address": "fa:16:3e:35:df:c3", "network": {"id": "865b8b48-3753-4a05-b614-ccecb1e87781", "bridge": "br-int", "label": "tempest-network-smoke--2066791378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41a6ffab20ee4735b3f190a1e087aed2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap933bd457-0c", "ovs_interfaceid": "933bd457-0cc9-4849-a69f-0f02814a844a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 26 23:43:56 compute-0 nova_compute[189387]: 2025-11-26 23:43:56.147 189391 DEBUG nova.network.os_vif_util [None req-eb6a0f4e-eb96-4600-bd1a-d59c7d1ae93a 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Converting VIF {"id": "933bd457-0cc9-4849-a69f-0f02814a844a", "address": "fa:16:3e:35:df:c3", "network": {"id": "865b8b48-3753-4a05-b614-ccecb1e87781", "bridge": "br-int", "label": "tempest-network-smoke--2066791378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41a6ffab20ee4735b3f190a1e087aed2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap933bd457-0c", "ovs_interfaceid": "933bd457-0cc9-4849-a69f-0f02814a844a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:43:56 compute-0 nova_compute[189387]: 2025-11-26 23:43:56.148 189391 DEBUG nova.network.os_vif_util [None req-eb6a0f4e-eb96-4600-bd1a-d59c7d1ae93a 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:35:df:c3,bridge_name='br-int',has_traffic_filtering=True,id=933bd457-0cc9-4849-a69f-0f02814a844a,network=Network(865b8b48-3753-4a05-b614-ccecb1e87781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap933bd457-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:43:56 compute-0 nova_compute[189387]: 2025-11-26 23:43:56.148 189391 DEBUG os_vif [None req-eb6a0f4e-eb96-4600-bd1a-d59c7d1ae93a 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:35:df:c3,bridge_name='br-int',has_traffic_filtering=True,id=933bd457-0cc9-4849-a69f-0f02814a844a,network=Network(865b8b48-3753-4a05-b614-ccecb1e87781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap933bd457-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 26 23:43:56 compute-0 nova_compute[189387]: 2025-11-26 23:43:56.150 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:56 compute-0 nova_compute[189387]: 2025-11-26 23:43:56.150 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap933bd457-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:43:56 compute-0 nova_compute[189387]: 2025-11-26 23:43:56.152 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:56 compute-0 nova_compute[189387]: 2025-11-26 23:43:56.154 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:43:56 compute-0 nova_compute[189387]: 2025-11-26 23:43:56.157 189391 INFO os_vif [None req-eb6a0f4e-eb96-4600-bd1a-d59c7d1ae93a 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:35:df:c3,bridge_name='br-int',has_traffic_filtering=True,id=933bd457-0cc9-4849-a69f-0f02814a844a,network=Network(865b8b48-3753-4a05-b614-ccecb1e87781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap933bd457-0c')#033[00m
Nov 26 23:43:56 compute-0 nova_compute[189387]: 2025-11-26 23:43:56.158 189391 INFO nova.virt.libvirt.driver [None req-eb6a0f4e-eb96-4600-bd1a-d59c7d1ae93a 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Deleting instance files /var/lib/nova/instances/280c0e48-ae70-40a7-96ca-137efae9ea75_del#033[00m
Nov 26 23:43:56 compute-0 nova_compute[189387]: 2025-11-26 23:43:56.159 189391 INFO nova.virt.libvirt.driver [None req-eb6a0f4e-eb96-4600-bd1a-d59c7d1ae93a 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Deletion of /var/lib/nova/instances/280c0e48-ae70-40a7-96ca-137efae9ea75_del complete#033[00m
Nov 26 23:43:56 compute-0 nova_compute[189387]: 2025-11-26 23:43:56.244 189391 INFO nova.compute.manager [None req-eb6a0f4e-eb96-4600-bd1a-d59c7d1ae93a 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Took 0.40 seconds to destroy the instance on the hypervisor.#033[00m
Nov 26 23:43:56 compute-0 nova_compute[189387]: 2025-11-26 23:43:56.245 189391 DEBUG oslo.service.loopingcall [None req-eb6a0f4e-eb96-4600-bd1a-d59c7d1ae93a 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 26 23:43:56 compute-0 nova_compute[189387]: 2025-11-26 23:43:56.246 189391 DEBUG nova.compute.manager [-] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 26 23:43:56 compute-0 nova_compute[189387]: 2025-11-26 23:43:56.247 189391 DEBUG nova.network.neutron [-] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 26 23:43:56 compute-0 nova_compute[189387]: 2025-11-26 23:43:56.258 189391 DEBUG nova.compute.manager [req-9147e9de-b5b3-4399-b836-dc8da4d4b5ce req-a4e97cdb-e55d-4653-a0c9-7a4ce40f2493 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Received event network-vif-unplugged-933bd457-0cc9-4849-a69f-0f02814a844a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:43:56 compute-0 nova_compute[189387]: 2025-11-26 23:43:56.259 189391 DEBUG oslo_concurrency.lockutils [req-9147e9de-b5b3-4399-b836-dc8da4d4b5ce req-a4e97cdb-e55d-4653-a0c9-7a4ce40f2493 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "280c0e48-ae70-40a7-96ca-137efae9ea75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:56 compute-0 nova_compute[189387]: 2025-11-26 23:43:56.260 189391 DEBUG oslo_concurrency.lockutils [req-9147e9de-b5b3-4399-b836-dc8da4d4b5ce req-a4e97cdb-e55d-4653-a0c9-7a4ce40f2493 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "280c0e48-ae70-40a7-96ca-137efae9ea75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:56 compute-0 nova_compute[189387]: 2025-11-26 23:43:56.261 189391 DEBUG oslo_concurrency.lockutils [req-9147e9de-b5b3-4399-b836-dc8da4d4b5ce req-a4e97cdb-e55d-4653-a0c9-7a4ce40f2493 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "280c0e48-ae70-40a7-96ca-137efae9ea75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:56 compute-0 nova_compute[189387]: 2025-11-26 23:43:56.262 189391 DEBUG nova.compute.manager [req-9147e9de-b5b3-4399-b836-dc8da4d4b5ce req-a4e97cdb-e55d-4653-a0c9-7a4ce40f2493 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] No waiting events found dispatching network-vif-unplugged-933bd457-0cc9-4849-a69f-0f02814a844a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:43:56 compute-0 nova_compute[189387]: 2025-11-26 23:43:56.262 189391 DEBUG nova.compute.manager [req-9147e9de-b5b3-4399-b836-dc8da4d4b5ce req-a4e97cdb-e55d-4653-a0c9-7a4ce40f2493 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Received event network-vif-unplugged-933bd457-0cc9-4849-a69f-0f02814a844a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
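The six records above show Nova's external-event plumbing in miniature: Neutron reports network-vif-unplugged, and the compute manager serializes the lookup of any waiting thread under a per-instance "<uuid>-events" lock before dispatching. A minimal sketch of that pop-under-lock pattern, assuming a plain dict-of-dicts event map rather than Nova's actual InstanceEvents internals:

    # Minimal sketch of the pop-under-lock pattern in the log above.
    # The _events map and its contents are simplifying assumptions,
    # not Nova's real InstanceEvents implementation.
    from oslo_concurrency import lockutils

    class InstanceEvents:
        def __init__(self):
            self._events = {}  # instance uuid -> {event name: waiter}

        def pop_instance_event(self, instance_uuid, event_name):
            @lockutils.synchronized('%s-events' % instance_uuid)
            def _pop_event():
                waiters = self._events.get(instance_uuid) or {}
                return waiters.pop(event_name, None)
            return _pop_event()

When the pop comes back empty, as here, the manager logs "No waiting events found" and still routes the event on to _process_instance_event, which is why the unplug is handled even though nothing was blocked waiting for it.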
Nov 26 23:43:57 compute-0 nova_compute[189387]: 2025-11-26 23:43:57.144 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:58 compute-0 nova_compute[189387]: 2025-11-26 23:43:58.162 189391 DEBUG nova.network.neutron [-] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:43:58 compute-0 nova_compute[189387]: 2025-11-26 23:43:58.190 189391 INFO nova.compute.manager [-] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Took 1.94 seconds to deallocate network for instance.#033[00m
Nov 26 23:43:58 compute-0 nova_compute[189387]: 2025-11-26 23:43:58.227 189391 DEBUG oslo_concurrency.lockutils [None req-eb6a0f4e-eb96-4600-bd1a-d59c7d1ae93a 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:58 compute-0 nova_compute[189387]: 2025-11-26 23:43:58.228 189391 DEBUG oslo_concurrency.lockutils [None req-eb6a0f4e-eb96-4600-bd1a-d59c7d1ae93a 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:58 compute-0 nova_compute[189387]: 2025-11-26 23:43:58.270 189391 DEBUG nova.compute.manager [req-f832642d-d04b-484c-af02-ff82b5269821 req-d3de0eb3-27c1-4947-96de-a2b51449cf77 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Received event network-vif-deleted-933bd457-0cc9-4849-a69f-0f02814a844a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:43:58 compute-0 nova_compute[189387]: 2025-11-26 23:43:58.370 189391 DEBUG nova.compute.provider_tree [None req-eb6a0f4e-eb96-4600-bd1a-d59c7d1ae93a 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:43:58 compute-0 nova_compute[189387]: 2025-11-26 23:43:58.397 189391 DEBUG nova.scheduler.client.report [None req-eb6a0f4e-eb96-4600-bd1a-d59c7d1ae93a 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
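The inventory payload in the record above is what Placement uses to size this node: usable capacity per resource class is (total - reserved) * allocation_ratio. For the figures logged here that works out to 32 schedulable VCPUs, 7168 MB of RAM, and about 70.2 GB of disk (the 0.9 disk ratio deliberately undercommits):

    # Capacity Placement derives from the inventory logged above:
    # capacity = (total - reserved) * allocation_ratio
    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 79, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB about 70.2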
Nov 26 23:43:58 compute-0 nova_compute[189387]: 2025-11-26 23:43:58.424 189391 DEBUG oslo_concurrency.lockutils [None req-eb6a0f4e-eb96-4600-bd1a-d59c7d1ae93a 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.196s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:58 compute-0 nova_compute[189387]: 2025-11-26 23:43:58.453 189391 INFO nova.scheduler.client.report [None req-eb6a0f4e-eb96-4600-bd1a-d59c7d1ae93a 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Deleted allocations for instance 280c0e48-ae70-40a7-96ca-137efae9ea75#033[00m
Nov 26 23:43:58 compute-0 nova_compute[189387]: 2025-11-26 23:43:58.462 189391 DEBUG nova.compute.manager [req-6d152bbe-919a-4052-aebc-0a1c7c2fe0e5 req-6c4f55d2-8432-4e77-8bfb-6fa9f0462e10 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Received event network-vif-plugged-933bd457-0cc9-4849-a69f-0f02814a844a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:43:58 compute-0 nova_compute[189387]: 2025-11-26 23:43:58.463 189391 DEBUG oslo_concurrency.lockutils [req-6d152bbe-919a-4052-aebc-0a1c7c2fe0e5 req-6c4f55d2-8432-4e77-8bfb-6fa9f0462e10 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "280c0e48-ae70-40a7-96ca-137efae9ea75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:43:58 compute-0 nova_compute[189387]: 2025-11-26 23:43:58.464 189391 DEBUG oslo_concurrency.lockutils [req-6d152bbe-919a-4052-aebc-0a1c7c2fe0e5 req-6c4f55d2-8432-4e77-8bfb-6fa9f0462e10 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "280c0e48-ae70-40a7-96ca-137efae9ea75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:43:58 compute-0 nova_compute[189387]: 2025-11-26 23:43:58.466 189391 DEBUG oslo_concurrency.lockutils [req-6d152bbe-919a-4052-aebc-0a1c7c2fe0e5 req-6c4f55d2-8432-4e77-8bfb-6fa9f0462e10 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "280c0e48-ae70-40a7-96ca-137efae9ea75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:58 compute-0 nova_compute[189387]: 2025-11-26 23:43:58.467 189391 DEBUG nova.compute.manager [req-6d152bbe-919a-4052-aebc-0a1c7c2fe0e5 req-6c4f55d2-8432-4e77-8bfb-6fa9f0462e10 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] No waiting events found dispatching network-vif-plugged-933bd457-0cc9-4849-a69f-0f02814a844a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:43:58 compute-0 nova_compute[189387]: 2025-11-26 23:43:58.468 189391 WARNING nova.compute.manager [req-6d152bbe-919a-4052-aebc-0a1c7c2fe0e5 req-6c4f55d2-8432-4e77-8bfb-6fa9f0462e10 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Received unexpected event network-vif-plugged-933bd457-0cc9-4849-a69f-0f02814a844a for instance with vm_state deleted and task_state None.#033[00m
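The WARNING above is the expected tail of a delete: a late network-vif-plugged notification arrives after teardown, and by the time it lands the instance record already reads vm_state deleted, so Nova logs it and drops it. A sketch of the guard that produces the message; LOG, instance, and event here are stand-ins for Nova's real objects, not its actual code:

    # Sketch of the guard behind the WARNING above; LOG, instance and
    # event are illustrative stand-ins, not Nova's actual implementation.
    def process_instance_event(instance, event, LOG):
        if instance.vm_state == 'deleted':
            LOG.warning('Received unexpected event %s for instance with '
                        'vm_state %s and task_state %s.',
                        event, instance.vm_state, instance.task_state)
            return  # nothing left to act on; drop the event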
Nov 26 23:43:58 compute-0 nova_compute[189387]: 2025-11-26 23:43:58.535 189391 DEBUG oslo_concurrency.lockutils [None req-eb6a0f4e-eb96-4600-bd1a-d59c7d1ae93a 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "280c0e48-ae70-40a7-96ca-137efae9ea75" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:43:58 compute-0 podman[252785]: 2025-11-26 23:43:58.82804345 +0000 UTC m=+0.111578976 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:43:58 compute-0 ovn_controller[97697]: 2025-11-26T23:43:58Z|00203|binding|INFO|Releasing lport 7b0be577-69f9-4df8-992b-e7c104217e56 from this chassis (sb_readonly=0)
Nov 26 23:43:58 compute-0 ovn_controller[97697]: 2025-11-26T23:43:58Z|00204|binding|INFO|Releasing lport 9bcac48d-895a-4cd4-ba63-78258e9255b2 from this chassis (sb_readonly=0)
Nov 26 23:43:59 compute-0 nova_compute[189387]: 2025-11-26 23:43:59.028 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:43:59 compute-0 podman[203621]: time="2025-11-26T23:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:43:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30755 "" "Go-http-client/1.1"
Nov 26 23:43:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5276 "" "Go-http-client/1.1"
Nov 26 23:44:00 compute-0 nova_compute[189387]: 2025-11-26 23:44:00.120 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:44:00 compute-0 nova_compute[189387]: 2025-11-26 23:44:00.675 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:01 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:01.010 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:74:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:17:d1:48:8c:c3'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:44:01 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:01.011 106595 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 26 23:44:01 compute-0 nova_compute[189387]: 2025-11-26 23:44:01.019 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:01 compute-0 nova_compute[189387]: 2025-11-26 23:44:01.154 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:01 compute-0 openstack_network_exporter[205787]: ERROR   23:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:44:01 compute-0 openstack_network_exporter[205787]: ERROR   23:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:44:01 compute-0 openstack_network_exporter[205787]: ERROR   23:44:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:44:01 compute-0 openstack_network_exporter[205787]: ERROR   23:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:44:01 compute-0 openstack_network_exporter[205787]: ERROR   23:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:44:01 compute-0 nova_compute[189387]: 2025-11-26 23:44:01.614 189391 DEBUG oslo_concurrency.lockutils [None req-5f720311-af11-4539-99f9-45a41c2dac1e 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Acquiring lock "cf0578c2-8c80-4b7e-a866-a753553c6f9e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:44:01 compute-0 nova_compute[189387]: 2025-11-26 23:44:01.616 189391 DEBUG oslo_concurrency.lockutils [None req-5f720311-af11-4539-99f9-45a41c2dac1e 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "cf0578c2-8c80-4b7e-a866-a753553c6f9e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:44:01 compute-0 nova_compute[189387]: 2025-11-26 23:44:01.617 189391 DEBUG oslo_concurrency.lockutils [None req-5f720311-af11-4539-99f9-45a41c2dac1e 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Acquiring lock "cf0578c2-8c80-4b7e-a866-a753553c6f9e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:44:01 compute-0 nova_compute[189387]: 2025-11-26 23:44:01.618 189391 DEBUG oslo_concurrency.lockutils [None req-5f720311-af11-4539-99f9-45a41c2dac1e 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "cf0578c2-8c80-4b7e-a866-a753553c6f9e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:44:01 compute-0 nova_compute[189387]: 2025-11-26 23:44:01.619 189391 DEBUG oslo_concurrency.lockutils [None req-5f720311-af11-4539-99f9-45a41c2dac1e 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "cf0578c2-8c80-4b7e-a866-a753553c6f9e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:44:01 compute-0 nova_compute[189387]: 2025-11-26 23:44:01.622 189391 INFO nova.compute.manager [None req-5f720311-af11-4539-99f9-45a41c2dac1e 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Terminating instance#033[00m
Nov 26 23:44:01 compute-0 nova_compute[189387]: 2025-11-26 23:44:01.624 189391 DEBUG nova.compute.manager [None req-5f720311-af11-4539-99f9-45a41c2dac1e 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 26 23:44:01 compute-0 kernel: tapd5e5a27b-25 (unregistering): left promiscuous mode
Nov 26 23:44:01 compute-0 NetworkManager[56227]: <info>  [1764200641.6877] device (tapd5e5a27b-25): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 23:44:01 compute-0 nova_compute[189387]: 2025-11-26 23:44:01.690 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:01 compute-0 ovn_controller[97697]: 2025-11-26T23:44:01Z|00205|binding|INFO|Releasing lport d5e5a27b-2557-44b9-9b24-392e1a2c33bd from this chassis (sb_readonly=0)
Nov 26 23:44:01 compute-0 ovn_controller[97697]: 2025-11-26T23:44:01Z|00206|binding|INFO|Setting lport d5e5a27b-2557-44b9-9b24-392e1a2c33bd down in Southbound
Nov 26 23:44:01 compute-0 ovn_controller[97697]: 2025-11-26T23:44:01Z|00207|binding|INFO|Removing iface tapd5e5a27b-25 ovn-installed in OVS
Nov 26 23:44:01 compute-0 nova_compute[189387]: 2025-11-26 23:44:01.700 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:01 compute-0 nova_compute[189387]: 2025-11-26 23:44:01.738 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:01 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Nov 26 23:44:01 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 48.331s CPU time.
Nov 26 23:44:01 compute-0 systemd-machined[155674]: Machine qemu-9-instance-00000009 terminated.
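The machined record above decodes neatly: instance-00000009 is Nova's default instance_name_template, 'instance-%08x', applied to the instance's database id (the same id=9 visible in the Instance dump below), while the leading qemu-9 is libvirt's own domain id. A one-line check, assuming the deployment keeps the default template:

    # Nova's default instance_name_template rendering database id 9;
    # assumes the default 'instance-%08x' template is in effect.
    assert 'instance-%08x' % 9 == 'instance-00000009'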
Nov 26 23:44:01 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:01.794 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:13:e3 10.100.0.14'], port_security=['fa:16:3e:81:13:e3 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'cf0578c2-8c80-4b7e-a866-a753553c6f9e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-865b8b48-3753-4a05-b614-ccecb1e87781', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '41a6ffab20ee4735b3f190a1e087aed2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6e207ef1-e39e-4231-9571-b551266f6cc9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5348c531-5047-446f-b828-c2a0486b273b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=d5e5a27b-2557-44b9-9b24-392e1a2c33bd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:44:01 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:01.796 106595 INFO neutron.agent.ovn.metadata.agent [-] Port d5e5a27b-2557-44b9-9b24-392e1a2c33bd in datapath 865b8b48-3753-4a05-b614-ccecb1e87781 unbound from our chassis#033[00m
Nov 26 23:44:01 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:01.798 106595 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 865b8b48-3753-4a05-b614-ccecb1e87781, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 26 23:44:01 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:01.799 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[ea2111c5-62b9-4a0c-aee1-9d0303a27d02]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:01 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:01.800 106595 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781 namespace which is not needed anymore#033[00m
Nov 26 23:44:01 compute-0 nova_compute[189387]: 2025-11-26 23:44:01.853 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:01 compute-0 nova_compute[189387]: 2025-11-26 23:44:01.865 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:01 compute-0 nova_compute[189387]: 2025-11-26 23:44:01.901 189391 INFO nova.virt.libvirt.driver [-] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Instance destroyed successfully.#033[00m
Nov 26 23:44:01 compute-0 nova_compute[189387]: 2025-11-26 23:44:01.901 189391 DEBUG nova.objects.instance [None req-5f720311-af11-4539-99f9-45a41c2dac1e 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lazy-loading 'resources' on Instance uuid cf0578c2-8c80-4b7e-a866-a753553c6f9e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:44:01 compute-0 nova_compute[189387]: 2025-11-26 23:44:01.918 189391 DEBUG nova.virt.libvirt.vif [None req-5f720311-af11-4539-99f9-45a41c2dac1e 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T23:42:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-647630909',display_name='tempest-TestNetworkBasicOps-server-647630909',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-647630909',id=9,image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKBVGkstngapUY9m82a680mxXz9lnXQYezDKbSNcLxIbEJr7iMwiK+lPpiPQRUyqGO2qKz9xbpOo2CkdLxDv6r6xZvkZysoo9t6UxaWs6cIXf8J/N0PiyT8UZowknUb2CQ==',key_name='tempest-TestNetworkBasicOps-658321597',keypairs=<?>,launch_index=0,launched_at=2025-11-26T23:42:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='41a6ffab20ee4735b3f190a1e087aed2',ramdisk_id='',reservation_id='r-v20zfk65',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1869958511',owner_user_name='tempest-TestNetworkBasicOps-1869958511-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T23:42:10Z,user_data=None,user_id='6a001028c92e48d0b5914bef72937111',uuid=cf0578c2-8c80-4b7e-a866-a753553c6f9e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d5e5a27b-2557-44b9-9b24-392e1a2c33bd", "address": "fa:16:3e:81:13:e3", "network": {"id": "865b8b48-3753-4a05-b614-ccecb1e87781", "bridge": "br-int", "label": "tempest-network-smoke--2066791378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41a6ffab20ee4735b3f190a1e087aed2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e5a27b-25", "ovs_interfaceid": "d5e5a27b-2557-44b9-9b24-392e1a2c33bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 26 23:44:01 compute-0 nova_compute[189387]: 2025-11-26 23:44:01.919 189391 DEBUG nova.network.os_vif_util [None req-5f720311-af11-4539-99f9-45a41c2dac1e 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Converting VIF {"id": "d5e5a27b-2557-44b9-9b24-392e1a2c33bd", "address": "fa:16:3e:81:13:e3", "network": {"id": "865b8b48-3753-4a05-b614-ccecb1e87781", "bridge": "br-int", "label": "tempest-network-smoke--2066791378", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "41a6ffab20ee4735b3f190a1e087aed2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd5e5a27b-25", "ovs_interfaceid": "d5e5a27b-2557-44b9-9b24-392e1a2c33bd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:44:01 compute-0 nova_compute[189387]: 2025-11-26 23:44:01.919 189391 DEBUG nova.network.os_vif_util [None req-5f720311-af11-4539-99f9-45a41c2dac1e 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:81:13:e3,bridge_name='br-int',has_traffic_filtering=True,id=d5e5a27b-2557-44b9-9b24-392e1a2c33bd,network=Network(865b8b48-3753-4a05-b614-ccecb1e87781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5e5a27b-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:44:01 compute-0 nova_compute[189387]: 2025-11-26 23:44:01.920 189391 DEBUG os_vif [None req-5f720311-af11-4539-99f9-45a41c2dac1e 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:13:e3,bridge_name='br-int',has_traffic_filtering=True,id=d5e5a27b-2557-44b9-9b24-392e1a2c33bd,network=Network(865b8b48-3753-4a05-b614-ccecb1e87781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5e5a27b-25') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 26 23:44:01 compute-0 nova_compute[189387]: 2025-11-26 23:44:01.922 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:01 compute-0 nova_compute[189387]: 2025-11-26 23:44:01.923 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd5e5a27b-25, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:44:01 compute-0 nova_compute[189387]: 2025-11-26 23:44:01.925 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:01 compute-0 nova_compute[189387]: 2025-11-26 23:44:01.927 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:01 compute-0 nova_compute[189387]: 2025-11-26 23:44:01.931 189391 INFO os_vif [None req-5f720311-af11-4539-99f9-45a41c2dac1e 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:81:13:e3,bridge_name='br-int',has_traffic_filtering=True,id=d5e5a27b-2557-44b9-9b24-392e1a2c33bd,network=Network(865b8b48-3753-4a05-b614-ccecb1e87781),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd5e5a27b-25')#033[00m
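Unplugging the VIF boils down to the one-command ovsdbapp transaction logged a few lines up: DelPortCommand(port=tapd5e5a27b-25, bridge=br-int, if_exists=True). Functionally it matches the ovs-vsctl one-liner below (illustrative only; os-vif speaks the OVSDB protocol directly rather than shelling out):

    import subprocess

    # CLI equivalent of the DelPortCommand transaction in the log;
    # os-vif actually issues this over the OVSDB IDL, not via ovs-vsctl.
    subprocess.run(
        ['ovs-vsctl', '--if-exists', 'del-port', 'br-int', 'tapd5e5a27b-25'],
        check=True,
    )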
Nov 26 23:44:01 compute-0 nova_compute[189387]: 2025-11-26 23:44:01.932 189391 INFO nova.virt.libvirt.driver [None req-5f720311-af11-4539-99f9-45a41c2dac1e 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Deleting instance files /var/lib/nova/instances/cf0578c2-8c80-4b7e-a866-a753553c6f9e_del#033[00m
Nov 26 23:44:01 compute-0 nova_compute[189387]: 2025-11-26 23:44:01.933 189391 INFO nova.virt.libvirt.driver [None req-5f720311-af11-4539-99f9-45a41c2dac1e 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Deletion of /var/lib/nova/instances/cf0578c2-8c80-4b7e-a866-a753553c6f9e_del complete#033[00m
Nov 26 23:44:02 compute-0 neutron-haproxy-ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781[251034]: [NOTICE]   (251040) : haproxy version is 2.8.14-c23fe91
Nov 26 23:44:02 compute-0 neutron-haproxy-ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781[251034]: [NOTICE]   (251040) : path to executable is /usr/sbin/haproxy
Nov 26 23:44:02 compute-0 neutron-haproxy-ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781[251034]: [WARNING]  (251040) : Exiting Master process...
Nov 26 23:44:02 compute-0 neutron-haproxy-ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781[251034]: [ALERT]    (251040) : Current worker (251042) exited with code 143 (Terminated)
Nov 26 23:44:02 compute-0 neutron-haproxy-ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781[251034]: [WARNING]  (251040) : All workers exited. Exiting... (0)
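The haproxy shutdown above is orderly despite the ALERT: exit code 143 is the conventional 128-plus-signal encoding for SIGTERM (15), meaning the worker was killed by the agent's stop request rather than crashing:

    import signal

    # Exit status 143 = 128 + SIGTERM (15): "terminated by signal 15",
    # the normal stop path when the agent tears the proxy down.
    assert 128 + signal.SIGTERM == 143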
Nov 26 23:44:02 compute-0 nova_compute[189387]: 2025-11-26 23:44:02.017 189391 INFO nova.compute.manager [None req-5f720311-af11-4539-99f9-45a41c2dac1e 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Took 0.39 seconds to destroy the instance on the hypervisor.#033[00m
Nov 26 23:44:02 compute-0 nova_compute[189387]: 2025-11-26 23:44:02.018 189391 DEBUG oslo.service.loopingcall [None req-5f720311-af11-4539-99f9-45a41c2dac1e 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 26 23:44:02 compute-0 nova_compute[189387]: 2025-11-26 23:44:02.018 189391 DEBUG nova.compute.manager [-] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 26 23:44:02 compute-0 nova_compute[189387]: 2025-11-26 23:44:02.019 189391 DEBUG nova.network.neutron [-] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 26 23:44:02 compute-0 systemd[1]: libpod-8e4242f61e14d1861afa39389c54aacf8e93d60a618d3cfade3c19b855dc42ce.scope: Deactivated successfully.
Nov 26 23:44:02 compute-0 podman[252844]: 2025-11-26 23:44:02.027467668 +0000 UTC m=+0.064532037 container died 8e4242f61e14d1861afa39389c54aacf8e93d60a618d3cfade3c19b855dc42ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:44:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8e4242f61e14d1861afa39389c54aacf8e93d60a618d3cfade3c19b855dc42ce-userdata-shm.mount: Deactivated successfully.
Nov 26 23:44:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ea13b9f8b7b03b1eb930a0052dc45a35110715364ba2c8104f9857ad1cf33ad-merged.mount: Deactivated successfully.
Nov 26 23:44:02 compute-0 nova_compute[189387]: 2025-11-26 23:44:02.066 189391 DEBUG nova.compute.manager [req-eef2ac68-c196-4056-aebd-c893924cfc18 req-73f2dc9a-f327-442f-b7e4-14c388399143 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Received event network-vif-unplugged-d5e5a27b-2557-44b9-9b24-392e1a2c33bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:44:02 compute-0 nova_compute[189387]: 2025-11-26 23:44:02.067 189391 DEBUG oslo_concurrency.lockutils [req-eef2ac68-c196-4056-aebd-c893924cfc18 req-73f2dc9a-f327-442f-b7e4-14c388399143 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "cf0578c2-8c80-4b7e-a866-a753553c6f9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:44:02 compute-0 nova_compute[189387]: 2025-11-26 23:44:02.068 189391 DEBUG oslo_concurrency.lockutils [req-eef2ac68-c196-4056-aebd-c893924cfc18 req-73f2dc9a-f327-442f-b7e4-14c388399143 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "cf0578c2-8c80-4b7e-a866-a753553c6f9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:44:02 compute-0 nova_compute[189387]: 2025-11-26 23:44:02.068 189391 DEBUG oslo_concurrency.lockutils [req-eef2ac68-c196-4056-aebd-c893924cfc18 req-73f2dc9a-f327-442f-b7e4-14c388399143 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "cf0578c2-8c80-4b7e-a866-a753553c6f9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:44:02 compute-0 nova_compute[189387]: 2025-11-26 23:44:02.068 189391 DEBUG nova.compute.manager [req-eef2ac68-c196-4056-aebd-c893924cfc18 req-73f2dc9a-f327-442f-b7e4-14c388399143 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] No waiting events found dispatching network-vif-unplugged-d5e5a27b-2557-44b9-9b24-392e1a2c33bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:44:02 compute-0 nova_compute[189387]: 2025-11-26 23:44:02.069 189391 DEBUG nova.compute.manager [req-eef2ac68-c196-4056-aebd-c893924cfc18 req-73f2dc9a-f327-442f-b7e4-14c388399143 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Received event network-vif-unplugged-d5e5a27b-2557-44b9-9b24-392e1a2c33bd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 26 23:44:02 compute-0 podman[252844]: 2025-11-26 23:44:02.073692414 +0000 UTC m=+0.110756743 container cleanup 8e4242f61e14d1861afa39389c54aacf8e93d60a618d3cfade3c19b855dc42ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:44:02 compute-0 systemd[1]: libpod-conmon-8e4242f61e14d1861afa39389c54aacf8e93d60a618d3cfade3c19b855dc42ce.scope: Deactivated successfully.
Nov 26 23:44:02 compute-0 podman[252872]: 2025-11-26 23:44:02.161120617 +0000 UTC m=+0.053373242 container remove 8e4242f61e14d1861afa39389c54aacf8e93d60a618d3cfade3c19b855dc42ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 23:44:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:02.168 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[67206c3d-25e6-4d4e-b40c-5aaa39149bf3]: (4, ('Wed Nov 26 11:44:01 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781 (8e4242f61e14d1861afa39389c54aacf8e93d60a618d3cfade3c19b855dc42ce)\n8e4242f61e14d1861afa39389c54aacf8e93d60a618d3cfade3c19b855dc42ce\nWed Nov 26 11:44:02 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781 (8e4242f61e14d1861afa39389c54aacf8e93d60a618d3cfade3c19b855dc42ce)\n8e4242f61e14d1861afa39389c54aacf8e93d60a618d3cfade3c19b855dc42ce\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:02.171 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[3f53b891-f255-48c2-9efc-ea1d2e2f8559]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:02.172 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap865b8b48-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:44:02 compute-0 nova_compute[189387]: 2025-11-26 23:44:02.174 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:02 compute-0 kernel: tap865b8b48-30: left promiscuous mode
Nov 26 23:44:02 compute-0 nova_compute[189387]: 2025-11-26 23:44:02.178 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:02.184 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[21dcbd47-3b93-42b7-88b0-282a03de7918]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:02 compute-0 nova_compute[189387]: 2025-11-26 23:44:02.192 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:02.199 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[6e952e9c-c4f6-4149-bf0a-7abc8551f209]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:02.200 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[4f8e58e5-933e-4787-bbfe-e09fcc5d2844]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:02.217 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[8bc1ffa1-ad3b-41a6-ad64-87aaa86e4162]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 520902, 'reachable_time': 33365, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252887, 'error': None, 'target': 'ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:02.220 106708 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 26 23:44:02 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:02.220 106708 DEBUG oslo.privsep.daemon [-] privsep: reply[e5abd6c2-3e72-4afe-ab21-a5e384ba4ae5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:02 compute-0 systemd[1]: run-netns-ovnmeta\x2d865b8b48\x2d3753\x2d4a05\x2db614\x2dccecb1e87781.mount: Deactivated successfully.
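With the last VIF port gone from the datapath, the agent's privsep helper deletes the ovnmeta namespace, and systemd reaps its bind mount (the run-netns-... unit above). The privileged removal reduces to a pyroute2 call along these lines (a sketch; neutron routes it through its privsep daemon, with error handling omitted here):

    from pyroute2 import netns

    # Sketch of the privileged removal behind the "Namespace ... deleted"
    # record above; neutron wraps this call in its privsep daemon.
    netns.remove('ovnmeta-865b8b48-3753-4a05-b614-ccecb1e87781')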
Nov 26 23:44:02 compute-0 nova_compute[189387]: 2025-11-26 23:44:02.788 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:03 compute-0 nova_compute[189387]: 2025-11-26 23:44:03.579 189391 DEBUG nova.network.neutron [-] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:44:03 compute-0 nova_compute[189387]: 2025-11-26 23:44:03.602 189391 INFO nova.compute.manager [-] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Took 1.58 seconds to deallocate network for instance.#033[00m
Nov 26 23:44:03 compute-0 nova_compute[189387]: 2025-11-26 23:44:03.645 189391 DEBUG oslo_concurrency.lockutils [None req-5f720311-af11-4539-99f9-45a41c2dac1e 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:44:03 compute-0 nova_compute[189387]: 2025-11-26 23:44:03.646 189391 DEBUG oslo_concurrency.lockutils [None req-5f720311-af11-4539-99f9-45a41c2dac1e 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:44:03 compute-0 nova_compute[189387]: 2025-11-26 23:44:03.733 189391 DEBUG nova.compute.provider_tree [None req-5f720311-af11-4539-99f9-45a41c2dac1e 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:44:03 compute-0 nova_compute[189387]: 2025-11-26 23:44:03.753 189391 DEBUG nova.scheduler.client.report [None req-5f720311-af11-4539-99f9-45a41c2dac1e 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 23:44:03 compute-0 nova_compute[189387]: 2025-11-26 23:44:03.784 189391 DEBUG oslo_concurrency.lockutils [None req-5f720311-af11-4539-99f9-45a41c2dac1e 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:44:03 compute-0 nova_compute[189387]: 2025-11-26 23:44:03.815 189391 INFO nova.scheduler.client.report [None req-5f720311-af11-4539-99f9-45a41c2dac1e 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Deleted allocations for instance cf0578c2-8c80-4b7e-a866-a753553c6f9e#033[00m
Nov 26 23:44:03 compute-0 nova_compute[189387]: 2025-11-26 23:44:03.887 189391 DEBUG oslo_concurrency.lockutils [None req-5f720311-af11-4539-99f9-45a41c2dac1e 6a001028c92e48d0b5914bef72937111 41a6ffab20ee4735b3f190a1e087aed2 - - default default] Lock "cf0578c2-8c80-4b7e-a866-a753553c6f9e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.271s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:44:04 compute-0 nova_compute[189387]: 2025-11-26 23:44:04.195 189391 DEBUG nova.compute.manager [req-3e2cd8d5-0e1f-4ceb-bc20-35b02c759fb5 req-2821f786-78b2-4d56-8fe0-c78903872795 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Received event network-vif-plugged-d5e5a27b-2557-44b9-9b24-392e1a2c33bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:44:04 compute-0 nova_compute[189387]: 2025-11-26 23:44:04.195 189391 DEBUG oslo_concurrency.lockutils [req-3e2cd8d5-0e1f-4ceb-bc20-35b02c759fb5 req-2821f786-78b2-4d56-8fe0-c78903872795 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "cf0578c2-8c80-4b7e-a866-a753553c6f9e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:44:04 compute-0 nova_compute[189387]: 2025-11-26 23:44:04.195 189391 DEBUG oslo_concurrency.lockutils [req-3e2cd8d5-0e1f-4ceb-bc20-35b02c759fb5 req-2821f786-78b2-4d56-8fe0-c78903872795 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "cf0578c2-8c80-4b7e-a866-a753553c6f9e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:44:04 compute-0 nova_compute[189387]: 2025-11-26 23:44:04.195 189391 DEBUG oslo_concurrency.lockutils [req-3e2cd8d5-0e1f-4ceb-bc20-35b02c759fb5 req-2821f786-78b2-4d56-8fe0-c78903872795 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "cf0578c2-8c80-4b7e-a866-a753553c6f9e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:44:04 compute-0 nova_compute[189387]: 2025-11-26 23:44:04.195 189391 DEBUG nova.compute.manager [req-3e2cd8d5-0e1f-4ceb-bc20-35b02c759fb5 req-2821f786-78b2-4d56-8fe0-c78903872795 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] No waiting events found dispatching network-vif-plugged-d5e5a27b-2557-44b9-9b24-392e1a2c33bd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:44:04 compute-0 nova_compute[189387]: 2025-11-26 23:44:04.196 189391 WARNING nova.compute.manager [req-3e2cd8d5-0e1f-4ceb-bc20-35b02c759fb5 req-2821f786-78b2-4d56-8fe0-c78903872795 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Received unexpected event network-vif-plugged-d5e5a27b-2557-44b9-9b24-392e1a2c33bd for instance with vm_state deleted and task_state None.#033[00m
Nov 26 23:44:04 compute-0 nova_compute[189387]: 2025-11-26 23:44:04.196 189391 DEBUG nova.compute.manager [req-3e2cd8d5-0e1f-4ceb-bc20-35b02c759fb5 req-2821f786-78b2-4d56-8fe0-c78903872795 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Received event network-vif-deleted-d5e5a27b-2557-44b9-9b24-392e1a2c33bd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:44:05 compute-0 nova_compute[189387]: 2025-11-26 23:44:05.680 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:05 compute-0 podman[252888]: 2025-11-26 23:44:05.836709272 +0000 UTC m=+0.119242625 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, container_name=ceilometer_agent_compute)
Nov 26 23:44:06 compute-0 nova_compute[189387]: 2025-11-26 23:44:06.345 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:44:06 compute-0 nova_compute[189387]: 2025-11-26 23:44:06.926 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:44:07 compute-0 ovn_controller[97697]: 2025-11-26T23:44:07Z|00208|binding|INFO|Releasing lport 7b0be577-69f9-4df8-992b-e7c104217e56 from this chassis (sb_readonly=0)
Nov 26 23:44:07 compute-0 nova_compute[189387]: 2025-11-26 23:44:07.974 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:44:08 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:08.014 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbd59242-3683-4df7-8a2a-12b2eb702783, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 23:44:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:09.653 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:44:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:09.654 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:44:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:09.654 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:44:10 compute-0 nova_compute[189387]: 2025-11-26 23:44:10.683 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:44:11 compute-0 nova_compute[189387]: 2025-11-26 23:44:11.113 189391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764200636.1119022, 280c0e48-ae70-40a7-96ca-137efae9ea75 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 23:44:11 compute-0 nova_compute[189387]: 2025-11-26 23:44:11.115 189391 INFO nova.compute.manager [-] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] VM Stopped (Lifecycle Event)
Nov 26 23:44:11 compute-0 nova_compute[189387]: 2025-11-26 23:44:11.180 189391 DEBUG nova.compute.manager [None req-48feef81-e22b-41df-9e23-225942bfa03b - - - - - -] [instance: 280c0e48-ae70-40a7-96ca-137efae9ea75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 23:44:11 compute-0 nova_compute[189387]: 2025-11-26 23:44:11.930 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:44:14 compute-0 podman[252908]: 2025-11-26 23:44:14.878750504 +0000 UTC m=+0.168555125 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 26 23:44:15 compute-0 nova_compute[189387]: 2025-11-26 23:44:15.510 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:44:15 compute-0 nova_compute[189387]: 2025-11-26 23:44:15.685 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:44:15 compute-0 podman[252935]: 2025-11-26 23:44:15.819856671 +0000 UTC m=+0.103828395 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 23:44:15 compute-0 podman[252933]: 2025-11-26 23:44:15.822531523 +0000 UTC m=+0.104899622 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, managed_by=edpm_ansible, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, release-0.7.12=, com.redhat.component=ubi9-container, config_id=edpm, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Nov 26 23:44:15 compute-0 podman[252934]: 2025-11-26 23:44:15.845270666 +0000 UTC m=+0.132334734 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 23:44:15 compute-0 podman[252937]: 2025-11-26 23:44:15.853713358 +0000 UTC m=+0.119205615 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=edpm, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container)
Nov 26 23:44:15 compute-0 podman[252936]: 2025-11-26 23:44:15.869025047 +0000 UTC m=+0.136563491 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:44:15 compute-0 nova_compute[189387]: 2025-11-26 23:44:15.922 189391 DEBUG oslo_concurrency.lockutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Acquiring lock "0449208f-d12b-40cb-aa71-6f67f687cb6f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:44:15 compute-0 nova_compute[189387]: 2025-11-26 23:44:15.923 189391 DEBUG oslo_concurrency.lockutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "0449208f-d12b-40cb-aa71-6f67f687cb6f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:44:15 compute-0 nova_compute[189387]: 2025-11-26 23:44:15.941 189391 DEBUG nova.compute.manager [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 26 23:44:16 compute-0 nova_compute[189387]: 2025-11-26 23:44:16.020 189391 DEBUG oslo_concurrency.lockutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:44:16 compute-0 nova_compute[189387]: 2025-11-26 23:44:16.021 189391 DEBUG oslo_concurrency.lockutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:44:16 compute-0 nova_compute[189387]: 2025-11-26 23:44:16.032 189391 DEBUG nova.virt.hardware [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 26 23:44:16 compute-0 nova_compute[189387]: 2025-11-26 23:44:16.033 189391 INFO nova.compute.claims [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Claim successful on node compute-0.ctlplane.example.com
Nov 26 23:44:16 compute-0 nova_compute[189387]: 2025-11-26 23:44:16.193 189391 DEBUG nova.compute.provider_tree [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:44:16 compute-0 nova_compute[189387]: 2025-11-26 23:44:16.213 189391 DEBUG nova.scheduler.client.report [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
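[Editor's note] Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio, so the figures above allow up to 32 vCPUs, 7168 MB of RAM and 70.2 GB of disk on this node. A quick check of that arithmetic:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2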
Nov 26 23:44:16 compute-0 nova_compute[189387]: 2025-11-26 23:44:16.242 189391 DEBUG oslo_concurrency.lockutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:44:16 compute-0 nova_compute[189387]: 2025-11-26 23:44:16.244 189391 DEBUG nova.compute.manager [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 26 23:44:16 compute-0 nova_compute[189387]: 2025-11-26 23:44:16.313 189391 DEBUG nova.compute.manager [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 26 23:44:16 compute-0 nova_compute[189387]: 2025-11-26 23:44:16.315 189391 DEBUG nova.network.neutron [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 26 23:44:16 compute-0 nova_compute[189387]: 2025-11-26 23:44:16.338 189391 INFO nova.virt.libvirt.driver [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 26 23:44:16 compute-0 nova_compute[189387]: 2025-11-26 23:44:16.360 189391 DEBUG nova.compute.manager [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 26 23:44:16 compute-0 nova_compute[189387]: 2025-11-26 23:44:16.463 189391 DEBUG nova.compute.manager [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 26 23:44:16 compute-0 nova_compute[189387]: 2025-11-26 23:44:16.469 189391 DEBUG nova.virt.libvirt.driver [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 26 23:44:16 compute-0 nova_compute[189387]: 2025-11-26 23:44:16.470 189391 INFO nova.virt.libvirt.driver [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Creating image(s)
Nov 26 23:44:16 compute-0 nova_compute[189387]: 2025-11-26 23:44:16.472 189391 DEBUG oslo_concurrency.lockutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Acquiring lock "/var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:44:16 compute-0 nova_compute[189387]: 2025-11-26 23:44:16.473 189391 DEBUG oslo_concurrency.lockutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "/var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:44:16 compute-0 nova_compute[189387]: 2025-11-26 23:44:16.475 189391 DEBUG oslo_concurrency.lockutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "/var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:44:16 compute-0 nova_compute[189387]: 2025-11-26 23:44:16.476 189391 DEBUG oslo_concurrency.lockutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Acquiring lock "b6646de0a938e108bf82b01ae34ceaf07f09b8ad" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:44:16 compute-0 nova_compute[189387]: 2025-11-26 23:44:16.477 189391 DEBUG oslo_concurrency.lockutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "b6646de0a938e108bf82b01ae34ceaf07f09b8ad" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:44:16 compute-0 nova_compute[189387]: 2025-11-26 23:44:16.522 189391 DEBUG nova.policy [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5715267a6ec9422aa9b3ef4a2956aa77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '717a3950b66241768222cb5d4ba3291e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
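[Editor's note] The failed policy check is benign: network:attach_external_network is admin-only by default, and these credentials carry only the member and reader roles, so the build simply proceeds without external-network attach rights. A minimal oslo.policy sketch of such a check (the "role:admin" rule string is an assumption for illustration, not Nova's exact default):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(
        policy.RuleDefault("network:attach_external_network", "role:admin"))

    creds = {"roles": ["member", "reader"],
             "project_id": "717a3950b66241768222cb5d4ba3291e"}
    # enforce() returns False rather than raising because do_raise defaults to False
    print(enforcer.enforce("network:attach_external_network", {}, creds))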
Nov 26 23:44:16 compute-0 nova_compute[189387]: 2025-11-26 23:44:16.898 189391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764200641.8973193, cf0578c2-8c80-4b7e-a866-a753553c6f9e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 23:44:16 compute-0 nova_compute[189387]: 2025-11-26 23:44:16.899 189391 INFO nova.compute.manager [-] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] VM Stopped (Lifecycle Event)
Nov 26 23:44:16 compute-0 nova_compute[189387]: 2025-11-26 23:44:16.924 189391 DEBUG nova.compute.manager [None req-7cf50a48-9468-4140-91d2-e86c87d93d7b - - - - - -] [instance: cf0578c2-8c80-4b7e-a866-a753553c6f9e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 23:44:16 compute-0 nova_compute[189387]: 2025-11-26 23:44:16.933 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:44:17 compute-0 nova_compute[189387]: 2025-11-26 23:44:17.027 189391 DEBUG nova.network.neutron [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Successfully created port: a6675240-60ea-47db-9ef6-66080adb5743 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 26 23:44:17 compute-0 nova_compute[189387]: 2025-11-26 23:44:17.690 189391 DEBUG nova.network.neutron [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Successfully updated port: a6675240-60ea-47db-9ef6-66080adb5743 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 26 23:44:17 compute-0 nova_compute[189387]: 2025-11-26 23:44:17.707 189391 DEBUG oslo_concurrency.lockutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Acquiring lock "refresh_cache-0449208f-d12b-40cb-aa71-6f67f687cb6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:44:17 compute-0 nova_compute[189387]: 2025-11-26 23:44:17.708 189391 DEBUG oslo_concurrency.lockutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Acquired lock "refresh_cache-0449208f-d12b-40cb-aa71-6f67f687cb6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:44:17 compute-0 nova_compute[189387]: 2025-11-26 23:44:17.708 189391 DEBUG nova.network.neutron [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 26 23:44:17 compute-0 nova_compute[189387]: 2025-11-26 23:44:17.781 189391 DEBUG oslo_concurrency.processutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6646de0a938e108bf82b01ae34ceaf07f09b8ad.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:44:17 compute-0 nova_compute[189387]: 2025-11-26 23:44:17.810 189391 DEBUG nova.compute.manager [req-2ca25d8b-42fa-43ae-916d-ff1dfc5a778d req-9ef080df-7bd5-4363-86d7-81148ffb86a7 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Received event network-changed-a6675240-60ea-47db-9ef6-66080adb5743 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 23:44:17 compute-0 nova_compute[189387]: 2025-11-26 23:44:17.810 189391 DEBUG nova.compute.manager [req-2ca25d8b-42fa-43ae-916d-ff1dfc5a778d req-9ef080df-7bd5-4363-86d7-81148ffb86a7 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Refreshing instance network info cache due to event network-changed-a6675240-60ea-47db-9ef6-66080adb5743. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 26 23:44:17 compute-0 nova_compute[189387]: 2025-11-26 23:44:17.810 189391 DEBUG oslo_concurrency.lockutils [req-2ca25d8b-42fa-43ae-916d-ff1dfc5a778d req-9ef080df-7bd5-4363-86d7-81148ffb86a7 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "refresh_cache-0449208f-d12b-40cb-aa71-6f67f687cb6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:44:17 compute-0 nova_compute[189387]: 2025-11-26 23:44:17.837 189391 DEBUG nova.network.neutron [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 26 23:44:17 compute-0 nova_compute[189387]: 2025-11-26 23:44:17.876 189391 DEBUG oslo_concurrency.processutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6646de0a938e108bf82b01ae34ceaf07f09b8ad.part --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
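[Editor's note] Each qemu-img info call here runs under oslo_concurrency.prlimit, capping the child's address space at 1 GiB (--as=1073741824) and its CPU time at 30 s (--cpu=30) so a malformed image cannot wedge the compute agent. The same guard expressed through the processutils API (a sketch reusing the values from the logged command):

    from oslo_concurrency import processutils

    # mirrors: prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info ...
    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
    out, err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/_base/b6646de0a938e108bf82b01ae34ceaf07f09b8ad.part",
        "--force-share", "--output=json",
        prlimit=limits)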
Nov 26 23:44:17 compute-0 nova_compute[189387]: 2025-11-26 23:44:17.877 189391 DEBUG nova.virt.images [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] aa1a3d84-3b07-42eb-bb8c-755851616ed6 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 26 23:44:17 compute-0 nova_compute[189387]: 2025-11-26 23:44:17.878 189391 DEBUG nova.privsep.utils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 26 23:44:17 compute-0 nova_compute[189387]: 2025-11-26 23:44:17.879 189391 DEBUG oslo_concurrency.processutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b6646de0a938e108bf82b01ae34ceaf07f09b8ad.part /var/lib/nova/instances/_base/b6646de0a938e108bf82b01ae34ceaf07f09b8ad.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:44:18 compute-0 nova_compute[189387]: 2025-11-26 23:44:18.144 189391 DEBUG oslo_concurrency.processutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/b6646de0a938e108bf82b01ae34ceaf07f09b8ad.part /var/lib/nova/instances/_base/b6646de0a938e108bf82b01ae34ceaf07f09b8ad.converted" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:44:18 compute-0 nova_compute[189387]: 2025-11-26 23:44:18.150 189391 DEBUG oslo_concurrency.processutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6646de0a938e108bf82b01ae34ceaf07f09b8ad.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:44:18 compute-0 nova_compute[189387]: 2025-11-26 23:44:18.223 189391 DEBUG oslo_concurrency.processutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6646de0a938e108bf82b01ae34ceaf07f09b8ad.converted --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:44:18 compute-0 nova_compute[189387]: 2025-11-26 23:44:18.224 189391 DEBUG oslo_concurrency.lockutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "b6646de0a938e108bf82b01ae34ceaf07f09b8ad" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:44:18 compute-0 nova_compute[189387]: 2025-11-26 23:44:18.237 189391 DEBUG oslo_concurrency.processutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6646de0a938e108bf82b01ae34ceaf07f09b8ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:44:18 compute-0 nova_compute[189387]: 2025-11-26 23:44:18.292 189391 DEBUG oslo_concurrency.processutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6646de0a938e108bf82b01ae34ceaf07f09b8ad --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:44:18 compute-0 nova_compute[189387]: 2025-11-26 23:44:18.293 189391 DEBUG oslo_concurrency.lockutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Acquiring lock "b6646de0a938e108bf82b01ae34ceaf07f09b8ad" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:44:18 compute-0 nova_compute[189387]: 2025-11-26 23:44:18.293 189391 DEBUG oslo_concurrency.lockutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "b6646de0a938e108bf82b01ae34ceaf07f09b8ad" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:44:18 compute-0 nova_compute[189387]: 2025-11-26 23:44:18.304 189391 DEBUG oslo_concurrency.processutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6646de0a938e108bf82b01ae34ceaf07f09b8ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:44:18 compute-0 nova_compute[189387]: 2025-11-26 23:44:18.360 189391 DEBUG oslo_concurrency.processutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6646de0a938e108bf82b01ae34ceaf07f09b8ad --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:44:18 compute-0 nova_compute[189387]: 2025-11-26 23:44:18.361 189391 DEBUG oslo_concurrency.processutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b6646de0a938e108bf82b01ae34ceaf07f09b8ad,backing_fmt=raw /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:44:18 compute-0 nova_compute[189387]: 2025-11-26 23:44:18.406 189391 DEBUG oslo_concurrency.processutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b6646de0a938e108bf82b01ae34ceaf07f09b8ad,backing_fmt=raw /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk 1073741824" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:44:18 compute-0 nova_compute[189387]: 2025-11-26 23:44:18.407 189391 DEBUG oslo_concurrency.lockutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "b6646de0a938e108bf82b01ae34ceaf07f09b8ad" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
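[Editor's note] This run of commands is Nova's copy-on-write image cache in action: the Glance image is fetched into _base as a .part file, flattened from qcow2 to raw (the compute appears configured to force raw base images, an inference from the "was qcow2, converting to raw" line), and the instance then receives a thin qcow2 overlay whose backing file is the shared raw base; 1073741824 bytes is the flavor's 1 GiB root disk. The same two steps, reduced to a subprocess sketch with the paths from the log:

    import subprocess

    base = "/var/lib/nova/instances/_base/b6646de0a938e108bf82b01ae34ceaf07f09b8ad"
    disk = "/var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk"

    # 1. flatten the downloaded qcow2 into a raw base image shared by all instances
    subprocess.run(["qemu-img", "convert", "-t", "none", "-O", "raw", "-f", "qcow2",
                    base + ".part", base + ".converted"], check=True)
    # (the .converted file is then renamed to the final base path)

    # 2. thin per-instance overlay: writes land in 'disk', reads fall through to the base
    subprocess.run(["qemu-img", "create", "-f", "qcow2",
                    "-o", f"backing_file={base},backing_fmt=raw",
                    disk, "1073741824"], check=True)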
Nov 26 23:44:18 compute-0 nova_compute[189387]: 2025-11-26 23:44:18.408 189391 DEBUG oslo_concurrency.processutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6646de0a938e108bf82b01ae34ceaf07f09b8ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:44:18 compute-0 nova_compute[189387]: 2025-11-26 23:44:18.480 189391 DEBUG oslo_concurrency.processutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6646de0a938e108bf82b01ae34ceaf07f09b8ad --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:44:18 compute-0 nova_compute[189387]: 2025-11-26 23:44:18.481 189391 DEBUG nova.virt.disk.api [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Checking if we can resize image /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 26 23:44:18 compute-0 nova_compute[189387]: 2025-11-26 23:44:18.482 189391 DEBUG oslo_concurrency.processutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:44:18 compute-0 nova_compute[189387]: 2025-11-26 23:44:18.540 189391 DEBUG oslo_concurrency.processutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:44:18 compute-0 nova_compute[189387]: 2025-11-26 23:44:18.542 189391 DEBUG nova.virt.disk.api [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Cannot resize image /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
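[Editor's note] The refusal is expected, not an error: the requested size (1073741824 bytes, the 1 GiB flavor root disk) does not exceed the overlay's current virtual size, and Nova only ever grows disks, since blindly shrinking a guest filesystem would destroy data. The check, paraphrased (not Nova's exact code):

    def can_resize_image(current_bytes, requested_bytes):
        # growing is always safe; shrinking would truncate guest data
        return requested_bytes > current_bytes

    print(can_resize_image(1073741824, 1073741824))  # False -> "Cannot resize ..."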
Nov 26 23:44:18 compute-0 nova_compute[189387]: 2025-11-26 23:44:18.543 189391 DEBUG nova.objects.instance [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lazy-loading 'migration_context' on Instance uuid 0449208f-d12b-40cb-aa71-6f67f687cb6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 23:44:18 compute-0 nova_compute[189387]: 2025-11-26 23:44:18.567 189391 DEBUG nova.virt.libvirt.driver [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 26 23:44:18 compute-0 nova_compute[189387]: 2025-11-26 23:44:18.568 189391 DEBUG nova.virt.libvirt.driver [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Ensure instance console log exists: /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 26 23:44:18 compute-0 nova_compute[189387]: 2025-11-26 23:44:18.569 189391 DEBUG oslo_concurrency.lockutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:44:18 compute-0 nova_compute[189387]: 2025-11-26 23:44:18.570 189391 DEBUG oslo_concurrency.lockutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:44:18 compute-0 nova_compute[189387]: 2025-11-26 23:44:18.570 189391 DEBUG oslo_concurrency.lockutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.687 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.877 189391 DEBUG nova.network.neutron [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Updating instance_info_cache with network_info: [{"id": "a6675240-60ea-47db-9ef6-66080adb5743", "address": "fa:16:3e:d6:2e:64", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6675240-60", "ovs_interfaceid": "a6675240-60ea-47db-9ef6-66080adb5743", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.905 189391 DEBUG oslo_concurrency.lockutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Releasing lock "refresh_cache-0449208f-d12b-40cb-aa71-6f67f687cb6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.905 189391 DEBUG nova.compute.manager [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Instance network_info: |[{"id": "a6675240-60ea-47db-9ef6-66080adb5743", "address": "fa:16:3e:d6:2e:64", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6675240-60", "ovs_interfaceid": "a6675240-60ea-47db-9ef6-66080adb5743", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
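[Editor's note] The cached network_info blob above is plain JSON; the pieces the hypervisor wiring actually consumes are the port id, MAC, tap device name, fixed IP and MTU. A quick extraction sketch over a trimmed copy of that structure:

    vif = {
        "id": "a6675240-60ea-47db-9ef6-66080adb5743",
        "address": "fa:16:3e:d6:2e:64",
        "devname": "tapa6675240-60",
        "network": {"meta": {"mtu": 1442},
                    "subnets": [{"ips": [{"address": "10.100.2.181"}]}]},
    }

    # collect every fixed IP across all subnets of the VIF
    fixed_ips = [ip["address"]
                 for subnet in vif["network"]["subnets"]
                 for ip in subnet["ips"]]
    print(vif["devname"], vif["address"], fixed_ips, vif["network"]["meta"]["mtu"])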
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.906 189391 DEBUG oslo_concurrency.lockutils [req-2ca25d8b-42fa-43ae-916d-ff1dfc5a778d req-9ef080df-7bd5-4363-86d7-81148ffb86a7 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquired lock "refresh_cache-0449208f-d12b-40cb-aa71-6f67f687cb6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.906 189391 DEBUG nova.network.neutron [req-2ca25d8b-42fa-43ae-916d-ff1dfc5a778d req-9ef080df-7bd5-4363-86d7-81148ffb86a7 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Refreshing network info cache for port a6675240-60ea-47db-9ef6-66080adb5743 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.908 189391 DEBUG nova.virt.libvirt.driver [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Start _get_guest_xml network_info=[{"id": "a6675240-60ea-47db-9ef6-66080adb5743", "address": "fa:16:3e:d6:2e:64", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6675240-60", "ovs_interfaceid": "a6675240-60ea-47db-9ef6-66080adb5743", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T23:44:08Z,direct_url=<?>,disk_format='qcow2',id=aa1a3d84-3b07-42eb-bb8c-755851616ed6,min_disk=0,min_ram=0,name='tempest-scenario-img--1845119861',owner='717a3950b66241768222cb5d4ba3291e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T23:44:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': 'aa1a3d84-3b07-42eb-bb8c-755851616ed6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.917 189391 WARNING nova.virt.libvirt.driver [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.924 189391 DEBUG nova.virt.libvirt.host [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.925 189391 DEBUG nova.virt.libvirt.host [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.930 189391 DEBUG nova.virt.libvirt.host [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.930 189391 DEBUG nova.virt.libvirt.host [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
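
[annotation] The two probes above try cgroup v1 first, find no cpu controller, then succeed on cgroup v2. On a unified-hierarchy host the available controllers are listed in the standard kernel file /sys/fs/cgroup/cgroup.controllers; a minimal sketch of the v2 check (the helper name is illustrative, not nova's):

    # Sketch: detect a cgroup v2 "cpu" controller the way the host probe above does.
    # /sys/fs/cgroup/cgroup.controllers is the standard cgroup v2 interface file;
    # it holds a space-separated controller list, e.g. "cpuset cpu io memory pids".
    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        controllers = Path(root, "cgroup.controllers")
        try:
            return "cpu" in controllers.read_text().split()
        except FileNotFoundError:
            # No unified hierarchy mounted: the host is cgroup v1 only.
            return False

    if __name__ == "__main__":
        print("cgroup v2 cpu controller:", has_cgroupsv2_cpu_controller())
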
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.931 189391 DEBUG nova.virt.libvirt.driver [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.931 189391 DEBUG nova.virt.hardware [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T23:40:03Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a4234b2d-ed51-4e17-ad57-a8fb6154451b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T23:44:08Z,direct_url=<?>,disk_format='qcow2',id=aa1a3d84-3b07-42eb-bb8c-755851616ed6,min_disk=0,min_ram=0,name='tempest-scenario-img--1845119861',owner='717a3950b66241768222cb5d4ba3291e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T23:44:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.932 189391 DEBUG nova.virt.hardware [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.932 189391 DEBUG nova.virt.hardware [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.932 189391 DEBUG nova.virt.hardware [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.932 189391 DEBUG nova.virt.hardware [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.933 189391 DEBUG nova.virt.hardware [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.933 189391 DEBUG nova.virt.hardware [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.934 189391 DEBUG nova.virt.hardware [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.934 189391 DEBUG nova.virt.hardware [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.934 189391 DEBUG nova.virt.hardware [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.935 189391 DEBUG nova.virt.hardware [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
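
[annotation] The topology walk above reads as: flavor and image set no limits or preferences (0:0:0), so the maxima default to 65536 each, and for a 1-vCPU guest exactly one (sockets, cores, threads) factorisation exists. A simplified re-implementation of that enumeration (illustrative, not nova's exact code, which also sorts candidates against the preferred topology):

    # Sketch: enumerate guest CPU topologies as described in the log above --
    # every (sockets, cores, threads) factorisation of the vCPU count that
    # fits under the maxima. Nova then sorts these by preference.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for s in range(1, min(max_sockets, vcpus) + 1):
            for c in range(1, min(max_cores, vcpus) + 1):
                for t in range(1, min(max_threads, vcpus) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    # For the 1-vCPU m1.nano flavor this yields exactly one topology:
    print(list(possible_topologies(1)))   # [(1, 1, 1)]
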
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.937 189391 DEBUG nova.virt.libvirt.vif [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T23:44:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di',id=14,image_ref='aa1a3d84-3b07-42eb-bb8c-755851616ed6',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='92e43243-aca7-437e-ae08-bcb42a48e489'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='717a3950b66241768222cb5d4ba3291e',ramdisk_id='',reservation_id='r-bszb5qzy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='aa1a3d84-3b07-42eb-bb8c-755851616ed6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1561175050',owner_user_name='tempest-PrometheusGabbiTest-1561175050-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T23:44:16Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='5715267a6ec9422aa9b3ef4a2956aa77',uuid=0449208f-d12b-40cb-aa71-6f67f687cb6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a6675240-60ea-47db-9ef6-66080adb5743", "address": "fa:16:3e:d6:2e:64", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6675240-60", "ovs_interfaceid": "a6675240-60ea-47db-9ef6-66080adb5743", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
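
[annotation] The Instance dump above carries the server's user_data as base64. Decoding it (deterministic, value copied from the log line) shows the CPU-load script the tempest scenario injects:

    # Decode the user_data field from the Instance object logged above.
    import base64

    user_data = ("IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJh"
                 "bmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==")
    print(base64.b64decode(user_data).decode())
    # #!/bin/sh
    # echo 'Loading CPU'
    # set -v
    # cat /dev/urandom > /dev/null & sleep 300 ; kill $!
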
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.938 189391 DEBUG nova.network.os_vif_util [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Converting VIF {"id": "a6675240-60ea-47db-9ef6-66080adb5743", "address": "fa:16:3e:d6:2e:64", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6675240-60", "ovs_interfaceid": "a6675240-60ea-47db-9ef6-66080adb5743", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.939 189391 DEBUG nova.network.os_vif_util [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:2e:64,bridge_name='br-int',has_traffic_filtering=True,id=a6675240-60ea-47db-9ef6-66080adb5743,network=Network(76428163-53d4-4bce-87f0-25b9eaf2a465),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6675240-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
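
[annotation] The two os_vif_util lines above are nova converting its own VIF dict into the os-vif VIFOpenVSwitch object it hands to the ovs plugin. A minimal sketch of driving that public API directly, with the values copied from the log; treat the exact field set as an assumption to check against your os-vif release:

    # Sketch: what the conversion above hands to os-vif. Values copied from
    # the log; verify field names against your os-vif version.
    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the ovs / linux_bridge / noop plugins

    net = network.Network(id="76428163-53d4-4bce-87f0-25b9eaf2a465",
                          bridge="br-int")
    v = vif.VIFOpenVSwitch(id="a6675240-60ea-47db-9ef6-66080adb5743",
                           address="fa:16:3e:d6:2e:64",
                           vif_name="tapa6675240-60",
                           bridge_name="br-int",
                           network=net)
    inst = instance_info.InstanceInfo(uuid="0449208f-d12b-40cb-aa71-6f67f687cb6f",
                                      name="instance-0000000e")
    os_vif.plug(v, inst)   # emits the ovsdbapp transactions seen further down
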
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.940 189391 DEBUG nova.objects.instance [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lazy-loading 'pci_devices' on Instance uuid 0449208f-d12b-40cb-aa71-6f67f687cb6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.958 189391 DEBUG nova.virt.libvirt.driver [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] End _get_guest_xml xml=<domain type="kvm">
Nov 26 23:44:20 compute-0 nova_compute[189387]:  <uuid>0449208f-d12b-40cb-aa71-6f67f687cb6f</uuid>
Nov 26 23:44:20 compute-0 nova_compute[189387]:  <name>instance-0000000e</name>
Nov 26 23:44:20 compute-0 nova_compute[189387]:  <memory>131072</memory>
Nov 26 23:44:20 compute-0 nova_compute[189387]:  <vcpu>1</vcpu>
Nov 26 23:44:20 compute-0 nova_compute[189387]:  <metadata>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 23:44:20 compute-0 nova_compute[189387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:      <nova:name>te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di</nova:name>
Nov 26 23:44:20 compute-0 nova_compute[189387]:      <nova:creationTime>2025-11-26 23:44:20</nova:creationTime>
Nov 26 23:44:20 compute-0 nova_compute[189387]:      <nova:flavor name="m1.nano">
Nov 26 23:44:20 compute-0 nova_compute[189387]:        <nova:memory>128</nova:memory>
Nov 26 23:44:20 compute-0 nova_compute[189387]:        <nova:disk>1</nova:disk>
Nov 26 23:44:20 compute-0 nova_compute[189387]:        <nova:swap>0</nova:swap>
Nov 26 23:44:20 compute-0 nova_compute[189387]:        <nova:ephemeral>0</nova:ephemeral>
Nov 26 23:44:20 compute-0 nova_compute[189387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 23:44:20 compute-0 nova_compute[189387]:      </nova:flavor>
Nov 26 23:44:20 compute-0 nova_compute[189387]:      <nova:owner>
Nov 26 23:44:20 compute-0 nova_compute[189387]:        <nova:user uuid="5715267a6ec9422aa9b3ef4a2956aa77">tempest-PrometheusGabbiTest-1561175050-project-member</nova:user>
Nov 26 23:44:20 compute-0 nova_compute[189387]:        <nova:project uuid="717a3950b66241768222cb5d4ba3291e">tempest-PrometheusGabbiTest-1561175050</nova:project>
Nov 26 23:44:20 compute-0 nova_compute[189387]:      </nova:owner>
Nov 26 23:44:20 compute-0 nova_compute[189387]:      <nova:root type="image" uuid="aa1a3d84-3b07-42eb-bb8c-755851616ed6"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:      <nova:ports>
Nov 26 23:44:20 compute-0 nova_compute[189387]:        <nova:port uuid="a6675240-60ea-47db-9ef6-66080adb5743">
Nov 26 23:44:20 compute-0 nova_compute[189387]:          <nova:ip type="fixed" address="10.100.2.181" ipVersion="4"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:        </nova:port>
Nov 26 23:44:20 compute-0 nova_compute[189387]:      </nova:ports>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    </nova:instance>
Nov 26 23:44:20 compute-0 nova_compute[189387]:  </metadata>
Nov 26 23:44:20 compute-0 nova_compute[189387]:  <sysinfo type="smbios">
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <system>
Nov 26 23:44:20 compute-0 nova_compute[189387]:      <entry name="manufacturer">RDO</entry>
Nov 26 23:44:20 compute-0 nova_compute[189387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 23:44:20 compute-0 nova_compute[189387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 23:44:20 compute-0 nova_compute[189387]:      <entry name="serial">0449208f-d12b-40cb-aa71-6f67f687cb6f</entry>
Nov 26 23:44:20 compute-0 nova_compute[189387]:      <entry name="uuid">0449208f-d12b-40cb-aa71-6f67f687cb6f</entry>
Nov 26 23:44:20 compute-0 nova_compute[189387]:      <entry name="family">Virtual Machine</entry>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    </system>
Nov 26 23:44:20 compute-0 nova_compute[189387]:  </sysinfo>
Nov 26 23:44:20 compute-0 nova_compute[189387]:  <os>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <boot dev="hd"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <smbios mode="sysinfo"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:  </os>
Nov 26 23:44:20 compute-0 nova_compute[189387]:  <features>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <acpi/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <apic/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <vmcoreinfo/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:  </features>
Nov 26 23:44:20 compute-0 nova_compute[189387]:  <clock offset="utc">
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <timer name="hpet" present="no"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:  </clock>
Nov 26 23:44:20 compute-0 nova_compute[189387]:  <cpu mode="host-model" match="exact">
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:  </cpu>
Nov 26 23:44:20 compute-0 nova_compute[189387]:  <devices>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <disk type="file" device="disk">
Nov 26 23:44:20 compute-0 nova_compute[189387]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:      <target dev="vda" bus="virtio"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <disk type="file" device="cdrom">
Nov 26 23:44:20 compute-0 nova_compute[189387]:      <driver name="qemu" type="raw" cache="none"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.config"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:      <target dev="sda" bus="sata"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <interface type="ethernet">
Nov 26 23:44:20 compute-0 nova_compute[189387]:      <mac address="fa:16:3e:d6:2e:64"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:      <mtu size="1442"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:      <target dev="tapa6675240-60"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    </interface>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <serial type="pty">
Nov 26 23:44:20 compute-0 nova_compute[189387]:      <log file="/var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/console.log" append="off"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    </serial>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <video>
Nov 26 23:44:20 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    </video>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <input type="tablet" bus="usb"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <rng model="virtio">
Nov 26 23:44:20 compute-0 nova_compute[189387]:      <backend model="random">/dev/urandom</backend>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    </rng>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <controller type="usb" index="0"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    <memballoon model="virtio">
Nov 26 23:44:20 compute-0 nova_compute[189387]:      <stats period="10"/>
Nov 26 23:44:20 compute-0 nova_compute[189387]:    </memballoon>
Nov 26 23:44:20 compute-0 nova_compute[189387]:  </devices>
Nov 26 23:44:20 compute-0 nova_compute[189387]: </domain>
Nov 26 23:44:20 compute-0 nova_compute[189387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
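
[annotation] _get_guest_xml only renders the domain document dumped above; the spawn path then defines and boots it through libvirt (the "Started Virtual Machine qemu-15-instance-0000000e" systemd line further down is the visible result). The same two libvirt calls as a sketch, assuming libvirt-python and the XML above saved to a file (filename illustrative):

    # Sketch: define and start a domain from XML with libvirt-python, the
    # same libvirt operations nova's spawn path performs for the XML above.
    import libvirt

    xml = open("instance-0000000e.xml").read()   # the <domain> document above

    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(xml)   # persistent definition, as nova uses
        dom.create()                # boots the guest (virsh start equivalent)
        print(dom.name(), "running:", dom.isActive())
    finally:
        conn.close()
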
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.958 189391 DEBUG nova.compute.manager [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Preparing to wait for external event network-vif-plugged-a6675240-60ea-47db-9ef6-66080adb5743 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.958 189391 DEBUG oslo_concurrency.lockutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Acquiring lock "0449208f-d12b-40cb-aa71-6f67f687cb6f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.959 189391 DEBUG oslo_concurrency.lockutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "0449208f-d12b-40cb-aa71-6f67f687cb6f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.959 189391 DEBUG oslo_concurrency.lockutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "0449208f-d12b-40cb-aa71-6f67f687cb6f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
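
[annotation] The three lockutils lines above are the usual acquire/release trio around nova's per-instance event dict; the lock name is "<instance uuid>-events". The oslo.concurrency primitive behind them, as a sketch (the critical-section body here is illustrative):

    # Sketch: the oslo.concurrency lock producing the Acquiring / acquired /
    # "released" lines above. Lock name copied from the log.
    from oslo_concurrency import lockutils

    instance_uuid = "0449208f-d12b-40cb-aa71-6f67f687cb6f"

    with lockutils.lock(f"{instance_uuid}-events"):
        # nova's _create_or_get_event body runs here: look up or create the
        # event that network-vif-plugged will later signal.
        pass
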
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.959 189391 DEBUG nova.virt.libvirt.vif [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T23:44:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di',id=14,image_ref='aa1a3d84-3b07-42eb-bb8c-755851616ed6',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='92e43243-aca7-437e-ae08-bcb42a48e489'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='717a3950b66241768222cb5d4ba3291e',ramdisk_id='',reservation_id='r-bszb5qzy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='aa1a3d84-3b07-42eb-bb8c-755851616ed6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1561175050',owner_user_name='tempest-PrometheusGabbiTest-1561175050-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T23:44:16Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='5715267a6ec9422aa9b3ef4a2956aa77',uuid=0449208f-d12b-40cb-aa71-6f67f687cb6f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a6675240-60ea-47db-9ef6-66080adb5743", "address": "fa:16:3e:d6:2e:64", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6675240-60", "ovs_interfaceid": "a6675240-60ea-47db-9ef6-66080adb5743", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.960 189391 DEBUG nova.network.os_vif_util [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Converting VIF {"id": "a6675240-60ea-47db-9ef6-66080adb5743", "address": "fa:16:3e:d6:2e:64", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6675240-60", "ovs_interfaceid": "a6675240-60ea-47db-9ef6-66080adb5743", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.960 189391 DEBUG nova.network.os_vif_util [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d6:2e:64,bridge_name='br-int',has_traffic_filtering=True,id=a6675240-60ea-47db-9ef6-66080adb5743,network=Network(76428163-53d4-4bce-87f0-25b9eaf2a465),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6675240-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.960 189391 DEBUG os_vif [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:2e:64,bridge_name='br-int',has_traffic_filtering=True,id=a6675240-60ea-47db-9ef6-66080adb5743,network=Network(76428163-53d4-4bce-87f0-25b9eaf2a465),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6675240-60') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.961 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.961 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.962 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.965 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.965 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa6675240-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.965 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa6675240-60, col_values=(('external_ids', {'iface-id': 'a6675240-60ea-47db-9ef6-66080adb5743', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d6:2e:64', 'vm-uuid': '0449208f-d12b-40cb-aa71-6f67f687cb6f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.967 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:20 compute-0 NetworkManager[56227]: <info>  [1764200660.9684] manager: (tapa6675240-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.971 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.979 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:20 compute-0 nova_compute[189387]: 2025-11-26 23:44:20.982 189391 INFO os_vif [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d6:2e:64,bridge_name='br-int',has_traffic_filtering=True,id=a6675240-60ea-47db-9ef6-66080adb5743,network=Network(76428163-53d4-4bce-87f0-25b9eaf2a465),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6675240-60')#033[00m
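
[annotation] The plug above is two ovsdbapp commands in one transaction: AddPortCommand, then a DbSetCommand stamping the Neutron port UUID into the interface's external_ids; that iface-id stamp is what ovn-controller matches against the southbound Port_Binding table a few seconds later. The same transaction expressed as a single ovs-vsctl call, shelled out from Python (CLI equivalent for inspection, not nova's actual code path):

    # Sketch: ovs-vsctl equivalent of the AddPortCommand + DbSetCommand
    # transaction logged above. Values copied from the log; run as root on
    # the compute host.
    import subprocess

    port = "tapa6675240-60"
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", port, "--",
         "set", "Interface", port,
         "external_ids:iface-id=a6675240-60ea-47db-9ef6-66080adb5743",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:d6:2e:64",
         "external_ids:vm-uuid=0449208f-d12b-40cb-aa71-6f67f687cb6f"],
        check=True)
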
Nov 26 23:44:21 compute-0 nova_compute[189387]: 2025-11-26 23:44:21.042 189391 DEBUG nova.virt.libvirt.driver [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:44:21 compute-0 nova_compute[189387]: 2025-11-26 23:44:21.042 189391 DEBUG nova.virt.libvirt.driver [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:44:21 compute-0 nova_compute[189387]: 2025-11-26 23:44:21.043 189391 DEBUG nova.virt.libvirt.driver [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] No VIF found with MAC fa:16:3e:d6:2e:64, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 26 23:44:21 compute-0 nova_compute[189387]: 2025-11-26 23:44:21.044 189391 INFO nova.virt.libvirt.driver [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Using config drive#033[00m
Nov 26 23:44:22 compute-0 ovn_controller[97697]: 2025-11-26T23:44:22Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:56:6c:8b 10.100.0.6
Nov 26 23:44:23 compute-0 nova_compute[189387]: 2025-11-26 23:44:23.052 189391 INFO nova.virt.libvirt.driver [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Creating config drive at /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.config#033[00m
Nov 26 23:44:23 compute-0 nova_compute[189387]: 2025-11-26 23:44:23.066 189391 DEBUG oslo_concurrency.processutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa8rt6y27 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:44:23 compute-0 nova_compute[189387]: 2025-11-26 23:44:23.112 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:23 compute-0 nova_compute[189387]: 2025-11-26 23:44:23.202 189391 DEBUG oslo_concurrency.processutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa8rt6y27" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
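
[annotation] "Using config drive" plus the mkisofs run above is nova building the ISO that cloud-init reads inside the guest; the "config-2" volume label is the contract cloud-init probes for. A cut-down reproduction from Python, with the flags copied from the log; the metadata tree below is a minimal stand-in for nova's temporary directory, not its full layout:

    # Sketch: build a config-drive ISO the way the mkisofs invocation above
    # does. The openstack/latest/meta_data.json path is the standard config
    # drive layout; the content here is a placeholder.
    import json
    import pathlib
    import subprocess
    import tempfile

    with tempfile.TemporaryDirectory() as tmp:
        md = pathlib.Path(tmp, "openstack", "latest")
        md.mkdir(parents=True)
        md.joinpath("meta_data.json").write_text(
            json.dumps({"uuid": "0449208f-d12b-40cb-aa71-6f67f687cb6f"}))
        subprocess.run(
            ["mkisofs", "-o", "disk.config",
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-publisher", "OpenStack Compute", "-quiet",
             "-J", "-r", "-V", "config-2", tmp],
            check=True)
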
Nov 26 23:44:23 compute-0 kernel: tapa6675240-60: entered promiscuous mode
Nov 26 23:44:23 compute-0 nova_compute[189387]: 2025-11-26 23:44:23.276 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:23 compute-0 NetworkManager[56227]: <info>  [1764200663.2794] manager: (tapa6675240-60): new Tun device (/org/freedesktop/NetworkManager/Devices/71)
Nov 26 23:44:23 compute-0 ovn_controller[97697]: 2025-11-26T23:44:23Z|00209|binding|INFO|Claiming lport a6675240-60ea-47db-9ef6-66080adb5743 for this chassis.
Nov 26 23:44:23 compute-0 ovn_controller[97697]: 2025-11-26T23:44:23Z|00210|binding|INFO|a6675240-60ea-47db-9ef6-66080adb5743: Claiming fa:16:3e:d6:2e:64 10.100.2.181
Nov 26 23:44:23 compute-0 nova_compute[189387]: 2025-11-26 23:44:23.286 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:23.291 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:2e:64 10.100.2.181'], port_security=['fa:16:3e:d6:2e:64 10.100.2.181'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.2.181/16', 'neutron:device_id': '0449208f-d12b-40cb-aa71-6f67f687cb6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76428163-53d4-4bce-87f0-25b9eaf2a465', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '717a3950b66241768222cb5d4ba3291e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '75bb422f-e7bb-41bc-a8be-3077d4c0bdb7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a3d5333e-350e-4d89-bebd-143dbb215949, chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=a6675240-60ea-47db-9ef6-66080adb5743) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:23.293 106595 INFO neutron.agent.ovn.metadata.agent [-] Port a6675240-60ea-47db-9ef6-66080adb5743 in datapath 76428163-53d4-4bce-87f0-25b9eaf2a465 bound to our chassis#033[00m
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:23.296 106595 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 76428163-53d4-4bce-87f0-25b9eaf2a465#033[00m
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:23.318 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[868f216f-7601-4b05-b1a9-4d09dac2837e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:23.319 106595 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap76428163-51 in ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:23.322 239757 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap76428163-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:23.322 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[cc432942-2e43-4d05-a710-ec0054c1f711]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:23 compute-0 nova_compute[189387]: 2025-11-26 23:44:23.325 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:23.325 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[e502f698-ded2-4f33-bc9a-e8e5d28d67b1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:23 compute-0 nova_compute[189387]: 2025-11-26 23:44:23.330 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:23 compute-0 ovn_controller[97697]: 2025-11-26T23:44:23Z|00211|binding|INFO|Setting lport a6675240-60ea-47db-9ef6-66080adb5743 ovn-installed in OVS
Nov 26 23:44:23 compute-0 ovn_controller[97697]: 2025-11-26T23:44:23Z|00212|binding|INFO|Setting lport a6675240-60ea-47db-9ef6-66080adb5743 up in Southbound
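
[annotation] The binding lines above are ovn-controller matching the interface's external_ids:iface-id against the Port_Binding row's logical_port, claiming the port for this chassis, and flipping it up in the southbound DB; that up transition is what ultimately lets Neutron send nova the network-vif-plugged event it is waiting for. A read-only check of the binding from the chassis, assuming ovn-sbctl can reach the southbound DB:

    # Sketch: inspect the southbound binding ovn-controller just made.
    import subprocess

    lport = "a6675240-60ea-47db-9ef6-66080adb5743"
    out = subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding", f"logical_port={lport}"],
        check=True, capture_output=True, text=True).stdout
    print(out)   # expect chassis set to compute-0's chassis row and up=[true]
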
Nov 26 23:44:23 compute-0 systemd-udevd[253087]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:23.340 106708 DEBUG oslo.privsep.daemon [-] privsep: reply[ad259009-1884-430d-b87e-7b7b3f062d3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:23 compute-0 systemd-machined[155674]: New machine qemu-15-instance-0000000e.
Nov 26 23:44:23 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000e.
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:23.368 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[ee156511-d96c-45a1-862a-2350578eb9fc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:23 compute-0 NetworkManager[56227]: <info>  [1764200663.3724] device (tapa6675240-60): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 23:44:23 compute-0 NetworkManager[56227]: <info>  [1764200663.3733] device (tapa6675240-60): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:23.406 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[9ab73b8c-01f2-4cf9-ae93-d96053ca59b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:23 compute-0 systemd-udevd[253093]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 23:44:23 compute-0 NetworkManager[56227]: <info>  [1764200663.4153] manager: (tap76428163-50): new Veth device (/org/freedesktop/NetworkManager/Devices/72)
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:23.417 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[f5b2089c-0ec5-40ba-98fe-1e91d169e277]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:23.453 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[4149cbb9-c42b-48a0-8651-b5f03163552f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:23.457 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[ac4460cb-4be6-4a96-a578-17d7923ce2a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:23 compute-0 NetworkManager[56227]: <info>  [1764200663.4798] device (tap76428163-50): carrier: link connected
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:23.487 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[4b133d1c-b471-453a-8411-bee6af8057e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:23.509 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[ea35aebd-99fd-4f49-a60e-a4af4a6eb8a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76428163-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3d:fd:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534613, 'reachable_time': 17532, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253120, 'error': None, 'target': 'ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:23.527 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[4d66e672-7aeb-4f13-9950-e47e0e5ec624]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3d:fdcb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534613, 'tstamp': 534613}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253121, 'error': None, 'target': 'ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:23.542 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[513d71ec-804d-4180-ba9c-732a8646e6f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76428163-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3d:fd:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534613, 'reachable_time': 17532, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253122, 'error': None, 'target': 'ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
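
[annotation] The two large privsep replies above are raw pyroute2 netlink dumps: the metadata agent, through its privileged daemon, reading the tap76428163-51 veth end inside the ovnmeta- namespace (RTM_NEWLINK) and its link-local address (RTM_NEWADDR). The same query done directly with pyroute2, without the privsep indirection; assumes root and that the namespace still exists:

    # Sketch: the netlink query the privsep daemon answered above, issued
    # directly inside the ovnmeta namespace with pyroute2.
    from pyroute2 import NetNS

    ns_name = "ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465"
    with NetNS(ns_name) as ns:
        for link in ns.get_links():
            name = link.get_attr("IFLA_IFNAME")
            mac = link.get_attr("IFLA_ADDRESS")
            # e.g. tap76428163-51 fa:16:3e:3d:fd:cb up
            print(name, mac, link["state"])
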
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:23.572 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[ed7bd036-8f22-4d54-a060-e2b66aea8d28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:23.643 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[64b97611-c52f-4cda-a998-e44b5b98d0f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:23.645 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76428163-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:23.645 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:23.646 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76428163-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:44:23 compute-0 nova_compute[189387]: 2025-11-26 23:44:23.649 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:23 compute-0 NetworkManager[56227]: <info>  [1764200663.6497] manager: (tap76428163-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Nov 26 23:44:23 compute-0 kernel: tap76428163-50: entered promiscuous mode
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:23.657 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap76428163-50, col_values=(('external_ids', {'iface-id': '6eddef7b-a60a-473c-89bf-18f9394dad32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
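Taken together, the three ovsdbapp commands above remove the tap port from br-ex, plug it into br-int, and stamp the Interface row with the iface-id that ovn-controller binds on. A sketch of the same sequence; the agent runs them as three single-command transactions, batched here for brevity, and the OVSDB socket path is the usual default, an assumption for this deployment:

    # Sketch of the DelPortCommand / AddPortCommand / DbSetCommand trio
    # from the log, assuming the default local OVSDB socket path.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap76428163-50', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap76428163-50', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap76428163-50',
            ('external_ids',
             {'iface-id': '6eddef7b-a60a-473c-89bf-18f9394dad32'})))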
Nov 26 23:44:23 compute-0 nova_compute[189387]: 2025-11-26 23:44:23.658 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:23 compute-0 ovn_controller[97697]: 2025-11-26T23:44:23Z|00213|binding|INFO|Releasing lport 6eddef7b-a60a-473c-89bf-18f9394dad32 from this chassis (sb_readonly=0)
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:23.660 106595 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/76428163-53d4-4bce-87f0-25b9eaf2a465.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/76428163-53d4-4bce-87f0-25b9eaf2a465.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:23.662 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[bb3a387c-a414-4327-82fa-3fb558db1252]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:23.662 106595 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: global
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]:    log         /dev/log local0 debug
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]:    log-tag     haproxy-metadata-proxy-76428163-53d4-4bce-87f0-25b9eaf2a465
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]:    user        root
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]:    group       root
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]:    maxconn     1024
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]:    pidfile     /var/lib/neutron/external/pids/76428163-53d4-4bce-87f0-25b9eaf2a465.pid.haproxy
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]:    daemon
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: defaults
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]:    log global
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]:    mode http
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]:    option httplog
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]:    option dontlognull
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]:    option http-server-close
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]:    option forwardfor
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]:    retries                 3
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]:    timeout http-request    30s
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]:    timeout connect         30s
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]:    timeout client          32s
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]:    timeout server          32s
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]:    timeout http-keep-alive 30s
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: listen listener
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]:    bind 169.254.169.254:80
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]:    server metadata /var/lib/neutron/metadata_proxy
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]:    http-request add-header X-OVN-Network-ID 76428163-53d4-4bce-87f0-25b9eaf2a465
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 26 23:44:23 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:23.663 106595 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465', 'env', 'PROCESS_TAG=haproxy-76428163-53d4-4bce-87f0-25b9eaf2a465', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/76428163-53d4-4bce-87f0-25b9eaf2a465.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
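The rendered configuration binds haproxy to 169.254.169.254:80 inside the namespace, forwards to the agent's unix socket at /var/lib/neutron/metadata_proxy, and stamps each request with X-OVN-Network-ID so the metadata agent can resolve which network the requesting instance lives on. Stripped of neutron-rootwrap, the launch command logged above reduces to the following sketch (assuming root; the real agent always goes through rootwrap):

    # The haproxy launch from the log, minus rootwrap (a sketch).
    import subprocess

    netns = 'ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465'
    conf = ('/var/lib/neutron/ovn-metadata-proxy/'
            '76428163-53d4-4bce-87f0-25b9eaf2a465.conf')
    subprocess.run(
        ['ip', 'netns', 'exec', netns,
         'env', 'PROCESS_TAG=haproxy-76428163-53d4-4bce-87f0-25b9eaf2a465',
         'haproxy', '-f', conf],
        check=True)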
Nov 26 23:44:23 compute-0 nova_compute[189387]: 2025-11-26 23:44:23.671 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:23 compute-0 nova_compute[189387]: 2025-11-26 23:44:23.707 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200663.706531, 0449208f-d12b-40cb-aa71-6f67f687cb6f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:44:23 compute-0 nova_compute[189387]: 2025-11-26 23:44:23.707 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] VM Started (Lifecycle Event)#033[00m
Nov 26 23:44:23 compute-0 nova_compute[189387]: 2025-11-26 23:44:23.734 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:44:23 compute-0 nova_compute[189387]: 2025-11-26 23:44:23.744 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200663.7076647, 0449208f-d12b-40cb-aa71-6f67f687cb6f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:44:23 compute-0 nova_compute[189387]: 2025-11-26 23:44:23.745 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] VM Paused (Lifecycle Event)#033[00m
Nov 26 23:44:23 compute-0 nova_compute[189387]: 2025-11-26 23:44:23.768 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:44:23 compute-0 nova_compute[189387]: 2025-11-26 23:44:23.776 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 23:44:23 compute-0 nova_compute[189387]: 2025-11-26 23:44:23.802 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
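The numeric states in these sync messages are nova's power-state constants; a reference sketch of the values this log exercises, per nova.compute.power_state:

    # The numeric states in the sync messages, per nova.compute.power_state.
    NOSTATE = 0  # DB power_state before the first successful sync
    RUNNING = 1  # VM power_state reported after "Resumed"
    PAUSED = 3   # VM power_state reported after "Paused"

    # 23:44:23: DB=NOSTATE, VM=PAUSED  -> pending task (spawning), skipped
    # 23:44:30: DB=NOSTATE, VM=RUNNING -> pending task (spawning), skipped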
Nov 26 23:44:24 compute-0 podman[253160]: 2025-11-26 23:44:24.100562968 +0000 UTC m=+0.092573256 container create 8ec07e663f1a1805c23593fc7ee20e3e7c7e20916e7261e63b4b3510d2ec8f69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 26 23:44:24 compute-0 podman[253160]: 2025-11-26 23:44:24.048730498 +0000 UTC m=+0.040740786 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 26 23:44:24 compute-0 systemd[1]: Started libpod-conmon-8ec07e663f1a1805c23593fc7ee20e3e7c7e20916e7261e63b4b3510d2ec8f69.scope.
Nov 26 23:44:24 compute-0 systemd[1]: Started libcrun container.
Nov 26 23:44:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9a3057320d1b85a4ac9f9c7ea9c4171a3ba99aea6dba66ced1a05a2fafa3d558/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 26 23:44:24 compute-0 podman[253160]: 2025-11-26 23:44:24.258745018 +0000 UTC m=+0.250755306 container init 8ec07e663f1a1805c23593fc7ee20e3e7c7e20916e7261e63b4b3510d2ec8f69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 26 23:44:24 compute-0 podman[253160]: 2025-11-26 23:44:24.266947603 +0000 UTC m=+0.258957861 container start 8ec07e663f1a1805c23593fc7ee20e3e7c7e20916e7261e63b4b3510d2ec8f69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 26 23:44:24 compute-0 neutron-haproxy-ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465[253175]: [NOTICE]   (253179) : New worker (253181) forked
Nov 26 23:44:24 compute-0 neutron-haproxy-ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465[253175]: [NOTICE]   (253179) : Loading success.
Nov 26 23:44:25 compute-0 nova_compute[189387]: 2025-11-26 23:44:25.690 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:25 compute-0 podman[253190]: 2025-11-26 23:44:25.814789141 +0000 UTC m=+0.108856971 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:44:25 compute-0 nova_compute[189387]: 2025-11-26 23:44:25.968 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:26 compute-0 nova_compute[189387]: 2025-11-26 23:44:26.058 189391 DEBUG nova.network.neutron [req-2ca25d8b-42fa-43ae-916d-ff1dfc5a778d req-9ef080df-7bd5-4363-86d7-81148ffb86a7 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Updated VIF entry in instance network info cache for port a6675240-60ea-47db-9ef6-66080adb5743. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 23:44:26 compute-0 nova_compute[189387]: 2025-11-26 23:44:26.059 189391 DEBUG nova.network.neutron [req-2ca25d8b-42fa-43ae-916d-ff1dfc5a778d req-9ef080df-7bd5-4363-86d7-81148ffb86a7 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Updating instance_info_cache with network_info: [{"id": "a6675240-60ea-47db-9ef6-66080adb5743", "address": "fa:16:3e:d6:2e:64", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6675240-60", "ovs_interfaceid": "a6675240-60ea-47db-9ef6-66080adb5743", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:44:26 compute-0 nova_compute[189387]: 2025-11-26 23:44:26.075 189391 DEBUG oslo_concurrency.lockutils [req-2ca25d8b-42fa-43ae-916d-ff1dfc5a778d req-9ef080df-7bd5-4363-86d7-81148ffb86a7 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Releasing lock "refresh_cache-0449208f-d12b-40cb-aa71-6f67f687cb6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:44:29 compute-0 podman[203621]: time="2025-11-26T23:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:44:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30755 "" "Go-http-client/1.1"
Nov 26 23:44:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5273 "" "Go-http-client/1.1"
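These two requests are a metrics collector walking podman's libpod REST API over the service socket. A stdlib-only sketch of the first call, assuming the root socket at /run/podman/podman.sock (the path the exporter config below points CONTAINER_HOST at); HTTP/1.0 keeps the response un-chunked:

    # Stdlib-only sketch of GET /v4.9.3/libpod/containers/json?all=true
    # over podman's unix service socket (path is an assumption).
    import json
    import socket

    req = (b'GET /v4.9.3/libpod/containers/json?all=true HTTP/1.0\r\n'
           b'Host: d\r\n\r\n')
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect('/run/podman/podman.sock')
        s.sendall(req)
        raw = b''
        while chunk := s.recv(65536):
            raw += chunk
    _headers, _, body = raw.partition(b'\r\n\r\n')
    print(len(json.loads(body)), 'containers')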
Nov 26 23:44:29 compute-0 podman[253210]: 2025-11-26 23:44:29.845572527 +0000 UTC m=+0.132236932 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 23:44:30 compute-0 nova_compute[189387]: 2025-11-26 23:44:30.392 189391 DEBUG nova.compute.manager [req-969f526f-e5d8-47cf-bdb3-4477cd90b5a3 req-ad355c49-f8a0-4a0f-9bd4-746fd7f1b9d1 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Received event network-vif-plugged-a6675240-60ea-47db-9ef6-66080adb5743 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:44:30 compute-0 nova_compute[189387]: 2025-11-26 23:44:30.393 189391 DEBUG oslo_concurrency.lockutils [req-969f526f-e5d8-47cf-bdb3-4477cd90b5a3 req-ad355c49-f8a0-4a0f-9bd4-746fd7f1b9d1 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "0449208f-d12b-40cb-aa71-6f67f687cb6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:44:30 compute-0 nova_compute[189387]: 2025-11-26 23:44:30.395 189391 DEBUG oslo_concurrency.lockutils [req-969f526f-e5d8-47cf-bdb3-4477cd90b5a3 req-ad355c49-f8a0-4a0f-9bd4-746fd7f1b9d1 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "0449208f-d12b-40cb-aa71-6f67f687cb6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:44:30 compute-0 nova_compute[189387]: 2025-11-26 23:44:30.395 189391 DEBUG oslo_concurrency.lockutils [req-969f526f-e5d8-47cf-bdb3-4477cd90b5a3 req-ad355c49-f8a0-4a0f-9bd4-746fd7f1b9d1 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "0449208f-d12b-40cb-aa71-6f67f687cb6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:44:30 compute-0 nova_compute[189387]: 2025-11-26 23:44:30.396 189391 DEBUG nova.compute.manager [req-969f526f-e5d8-47cf-bdb3-4477cd90b5a3 req-ad355c49-f8a0-4a0f-9bd4-746fd7f1b9d1 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Processing event network-vif-plugged-a6675240-60ea-47db-9ef6-66080adb5743 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 26 23:44:30 compute-0 nova_compute[189387]: 2025-11-26 23:44:30.398 189391 DEBUG nova.compute.manager [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 26 23:44:30 compute-0 nova_compute[189387]: 2025-11-26 23:44:30.406 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200670.404704, 0449208f-d12b-40cb-aa71-6f67f687cb6f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:44:30 compute-0 nova_compute[189387]: 2025-11-26 23:44:30.407 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] VM Resumed (Lifecycle Event)#033[00m
Nov 26 23:44:30 compute-0 nova_compute[189387]: 2025-11-26 23:44:30.410 189391 DEBUG nova.virt.libvirt.driver [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 26 23:44:30 compute-0 nova_compute[189387]: 2025-11-26 23:44:30.417 189391 INFO nova.virt.libvirt.driver [-] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Instance spawned successfully.#033[00m
Nov 26 23:44:30 compute-0 nova_compute[189387]: 2025-11-26 23:44:30.419 189391 DEBUG nova.virt.libvirt.driver [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 26 23:44:30 compute-0 nova_compute[189387]: 2025-11-26 23:44:30.428 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:44:30 compute-0 nova_compute[189387]: 2025-11-26 23:44:30.444 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 23:44:30 compute-0 nova_compute[189387]: 2025-11-26 23:44:30.452 189391 DEBUG nova.virt.libvirt.driver [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:44:30 compute-0 nova_compute[189387]: 2025-11-26 23:44:30.453 189391 DEBUG nova.virt.libvirt.driver [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:44:30 compute-0 nova_compute[189387]: 2025-11-26 23:44:30.454 189391 DEBUG nova.virt.libvirt.driver [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:44:30 compute-0 nova_compute[189387]: 2025-11-26 23:44:30.456 189391 DEBUG nova.virt.libvirt.driver [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:44:30 compute-0 nova_compute[189387]: 2025-11-26 23:44:30.457 189391 DEBUG nova.virt.libvirt.driver [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:44:30 compute-0 nova_compute[189387]: 2025-11-26 23:44:30.458 189391 DEBUG nova.virt.libvirt.driver [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:44:30 compute-0 nova_compute[189387]: 2025-11-26 23:44:30.492 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 23:44:30 compute-0 nova_compute[189387]: 2025-11-26 23:44:30.529 189391 INFO nova.compute.manager [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Took 14.07 seconds to spawn the instance on the hypervisor.#033[00m
Nov 26 23:44:30 compute-0 nova_compute[189387]: 2025-11-26 23:44:30.530 189391 DEBUG nova.compute.manager [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:44:30 compute-0 nova_compute[189387]: 2025-11-26 23:44:30.606 189391 INFO nova.compute.manager [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Took 14.62 seconds to build instance.#033[00m
Nov 26 23:44:30 compute-0 nova_compute[189387]: 2025-11-26 23:44:30.626 189391 DEBUG oslo_concurrency.lockutils [None req-7605d425-c596-44c9-81ba-e8d7036f2db3 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "0449208f-d12b-40cb-aa71-6f67f687cb6f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:44:30 compute-0 nova_compute[189387]: 2025-11-26 23:44:30.692 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:30 compute-0 nova_compute[189387]: 2025-11-26 23:44:30.971 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:31 compute-0 nova_compute[189387]: 2025-11-26 23:44:31.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:44:31 compute-0 nova_compute[189387]: 2025-11-26 23:44:31.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 26 23:44:31 compute-0 nova_compute[189387]: 2025-11-26 23:44:31.146 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 26 23:44:31 compute-0 openstack_network_exporter[205787]: ERROR   23:44:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:44:31 compute-0 openstack_network_exporter[205787]: ERROR   23:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:44:31 compute-0 openstack_network_exporter[205787]: ERROR   23:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:44:31 compute-0 openstack_network_exporter[205787]: ERROR   23:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
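The exporter reaches each daemon through its ovs-appctl control socket, so a missing socket (daemon not running on this node, or a non-default rundir) produces the errors above; ovn-northd in particular does not normally run on a compute node. A quick check for the sockets it probes (paths are the usual defaults and may differ on this deployment):

    # Empty globs here reproduce the "no control socket files found"
    # errors above (default paths, an assumption for this node).
    import glob

    for pattern in ('/var/run/openvswitch/ovsdb-server.*.ctl',
                    '/var/run/ovn/ovn-northd.*.ctl'):
        print(pattern, '->', glob.glob(pattern) or 'missing')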
Nov 26 23:44:32 compute-0 nova_compute[189387]: 2025-11-26 23:44:32.590 189391 DEBUG nova.compute.manager [req-4797c778-04a0-4c64-b1a9-34cd9dc95b6d req-94b5e5ee-2aae-4153-8bbf-581413023601 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Received event network-vif-plugged-a6675240-60ea-47db-9ef6-66080adb5743 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:44:32 compute-0 nova_compute[189387]: 2025-11-26 23:44:32.591 189391 DEBUG oslo_concurrency.lockutils [req-4797c778-04a0-4c64-b1a9-34cd9dc95b6d req-94b5e5ee-2aae-4153-8bbf-581413023601 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "0449208f-d12b-40cb-aa71-6f67f687cb6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:44:32 compute-0 nova_compute[189387]: 2025-11-26 23:44:32.591 189391 DEBUG oslo_concurrency.lockutils [req-4797c778-04a0-4c64-b1a9-34cd9dc95b6d req-94b5e5ee-2aae-4153-8bbf-581413023601 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "0449208f-d12b-40cb-aa71-6f67f687cb6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:44:32 compute-0 nova_compute[189387]: 2025-11-26 23:44:32.592 189391 DEBUG oslo_concurrency.lockutils [req-4797c778-04a0-4c64-b1a9-34cd9dc95b6d req-94b5e5ee-2aae-4153-8bbf-581413023601 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "0449208f-d12b-40cb-aa71-6f67f687cb6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:44:32 compute-0 nova_compute[189387]: 2025-11-26 23:44:32.593 189391 DEBUG nova.compute.manager [req-4797c778-04a0-4c64-b1a9-34cd9dc95b6d req-94b5e5ee-2aae-4153-8bbf-581413023601 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] No waiting events found dispatching network-vif-plugged-a6675240-60ea-47db-9ef6-66080adb5743 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:44:32 compute-0 nova_compute[189387]: 2025-11-26 23:44:32.594 189391 WARNING nova.compute.manager [req-4797c778-04a0-4c64-b1a9-34cd9dc95b6d req-94b5e5ee-2aae-4153-8bbf-581413023601 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Received unexpected event network-vif-plugged-a6675240-60ea-47db-9ef6-66080adb5743 for instance with vm_state active and task_state None.#033[00m
Nov 26 23:44:32 compute-0 nova_compute[189387]: 2025-11-26 23:44:32.970 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:35 compute-0 nova_compute[189387]: 2025-11-26 23:44:35.696 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:35 compute-0 nova_compute[189387]: 2025-11-26 23:44:35.973 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:36.849 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; the polling process can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 23:44:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:36.851 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 23:44:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c7950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:44:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:36.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:44:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c7950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:44:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c7950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:44:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c7950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:44:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c7950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:44:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c7950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:44:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c7950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:44:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c7950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:44:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c7950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:44:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c7950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:44:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c7950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:44:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c7950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:44:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c7950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:44:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c7950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:44:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c7950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:44:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c7950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:44:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c7950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:44:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c7950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:44:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c7950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:44:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c7950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:44:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c7950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:44:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c7950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:44:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c7950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:44:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c7950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:44:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c7950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:44:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce46c7950>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.122 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 0449208f-d12b-40cb-aa71-6f67f687cb6f from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.122 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/0449208f-d12b-40cb-aa71-6f67f687cb6f -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}caea05af4ff3bb71dca694a18a22cbf449a7452987534b1df6f159c64c91df36" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 26 23:44:37 compute-0 nova_compute[189387]: 2025-11-26 23:44:37.200 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:37 compute-0 podman[253235]: 2025-11-26 23:44:37.224467756 +0000 UTC m=+0.084296019 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.968 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1832 Content-Type: application/json Date: Wed, 26 Nov 2025 23:44:37 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-f5e72e28-0c6b-4816-a9fa-434ad6c19453 x-openstack-request-id: req-f5e72e28-0c6b-4816-a9fa-434ad6c19453 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.969 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "0449208f-d12b-40cb-aa71-6f67f687cb6f", "name": "te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di", "status": "ACTIVE", "tenant_id": "717a3950b66241768222cb5d4ba3291e", "user_id": "5715267a6ec9422aa9b3ef4a2956aa77", "metadata": {"metering.server_group": "92e43243-aca7-437e-ae08-bcb42a48e489"}, "hostId": "27d3802b1abe41bf2d1abd490eb0aa08acfb598924ded34a7e1a15fc", "image": {"id": "aa1a3d84-3b07-42eb-bb8c-755851616ed6", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/aa1a3d84-3b07-42eb-bb8c-755851616ed6"}]}, "flavor": {"id": "a4234b2d-ed51-4e17-ad57-a8fb6154451b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/a4234b2d-ed51-4e17-ad57-a8fb6154451b"}]}, "created": "2025-11-26T23:44:14Z", "updated": "2025-11-26T23:44:30Z", "addresses": {"": [{"version": 4, "addr": "10.100.2.181", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:d6:2e:64"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/0449208f-d12b-40cb-aa71-6f67f687cb6f"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/0449208f-d12b-40cb-aa71-6f67f687cb6f"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-26T23:44:30.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000e", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.969 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/0449208f-d12b-40cb-aa71-6f67f687cb6f used request id req-f5e72e28-0c6b-4816-a9fa-434ad6c19453 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
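The REQ/RESP pair above is ceilometer's instance-discovery lookup via python-novaclient. Reconstructed as a client call; the endpoint and credentials are placeholders, not values taken from this log:

    # The discovery lookup as a novaclient call (auth is a placeholder
    # for whatever keystoneauth session the agent is configured with).
    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client

    auth = v3.Password(
        auth_url='https://keystone-internal.openstack.svc:5000/v3',
        username='ceilometer', password='REDACTED',
        project_name='service',
        user_domain_name='Default', project_domain_name='Default')
    nova = client.Client('2.1', session=session.Session(auth=auth))

    server = nova.servers.get('0449208f-d12b-40cb-aa71-6f67f687cb6f')
    print(server.name, server.status)  # te-7486994-... ACTIVE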
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.972 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '0449208f-d12b-40cb-aa71-6f67f687cb6f', 'name': 'te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di', 'flavor': {'id': 'a4234b2d-ed51-4e17-ad57-a8fb6154451b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'aa1a3d84-3b07-42eb-bb8c-755851616ed6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '717a3950b66241768222cb5d4ba3291e', 'user_id': '5715267a6ec9422aa9b3ef4a2956aa77', 'hostId': '27d3802b1abe41bf2d1abd490eb0aa08acfb598924ded34a7e1a15fc', 'status': 'active', 'metadata': {'metering.server_group': '92e43243-aca7-437e-ae08-bcb42a48e489'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.978 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2b8e8c61-3efb-436e-87b5-35ac9fe60d69', 'name': 'tempest-ServerActionsTestJSON-server-317216903', 'flavor': {'id': 'a4234b2d-ed51-4e17-ad57-a8fb6154451b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '948c6d5b-0d46-4aec-8649-b6cdcb1a5694'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'b5cd62a5ad724aed83d939e3ba6d7fd7', 'user_id': '3753fb1a520b4e088ce6979db5ae3773', 'hostId': '739fe0b1504efff72ee8debbf23634c38f9403facb1d407a4ac9b5d1', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.978 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.979 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.979 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.979 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.979 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.980 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.980 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.980 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.980 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.981 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T23:44:37.979318) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.981 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T23:44:37.980632) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.987 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 0449208f-d12b-40cb-aa71-6f67f687cb6f / tapa6675240-60 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.988 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.992 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.993 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.994 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.994 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.994 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.994 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.994 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T23:44:37.994444) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.995 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.995 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.995 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.995 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.995 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.996 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.996 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T23:44:37.995920) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.996 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.997 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.997 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.997 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.997 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.998 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:44:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:37.998 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T23:44:37.998175) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.021 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/cpu volume: 7390000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.043 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/cpu volume: 33850000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.043 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.043 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.044 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.044 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.044 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.044 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.044 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.044 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.045 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T23:44:38.044463) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.045 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.045 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.046 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.046 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.046 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.046 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.047 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T23:44:38.046403) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.060 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.060 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.075 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.075 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.076 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.076 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.076 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.076 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.076 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.076 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.077 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/network.outgoing.bytes volume: 1278 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.077 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.077 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.078 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.078 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.078 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.078 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.078 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/network.outgoing.bytes.delta volume: 1278 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.079 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.079 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.079 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.079 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.080 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.080 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.080 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 0449208f-d12b-40cb-aa71-6f67f687cb6f: ceilometer.compute.pollsters.NoVolumeException
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.080 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/memory.usage volume: 42.3359375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.080 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.081 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.081 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.081 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.081 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.081 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.082 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/network.incoming.bytes volume: 1341 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.082 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.083 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.082 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T23:44:38.076796) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.083 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.083 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T23:44:38.078328) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.083 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.083 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T23:44:38.080067) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.083 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T23:44:38.081688) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.083 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.083 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.083 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>]
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.084 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.084 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.084 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.084 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.084 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.085 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/network.outgoing.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.085 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.086 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-26T23:44:38.083344) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.086 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T23:44:38.084619) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.086 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.086 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.086 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.086 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.087 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T23:44:38.086761) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.128 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.129 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.164 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.read.bytes volume: 32036864 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.164 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.165 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.165 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.165 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.165 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.165 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.166 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.166 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.166 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.166 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.166 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.167 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.167 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.167 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.167 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.167 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.167 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.168 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.168 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.168 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.168 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.168 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.168 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.169 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.latency volume: 756519668 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.169 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.latency volume: 3340721 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.169 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.read.latency volume: 1284547915 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.169 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.read.latency volume: 82616497 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.170 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.170 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.170 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.170 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.170 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.170 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.171 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.171 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.171 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.171 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.172 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.172 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.172 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.172 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.172 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.172 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.173 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.173 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.173 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.allocation volume: 30351360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.173 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.174 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.174 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.174 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.174 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.174 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.174 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.175 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.175 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.175 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.read.requests volume: 1212 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.175 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.176 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.176 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.176 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.176 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.176 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.177 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.177 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.177 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.178 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.write.bytes volume: 311296 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.178 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T23:44:38.165965) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.178 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T23:44:38.167456) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.178 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.178 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T23:44:38.168873) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.178 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T23:44:38.170889) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.178 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T23:44:38.172887) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.178 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T23:44:38.174891) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.178 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T23:44:38.177055) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.179 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.179 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.179 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.179 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.179 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.179 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.179 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.180 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.180 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
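The power.state samples (volume 1 for both instances) encode the domain state as an integer following the nova.compute.power_state convention, where 1 means RUNNING. A reference mapping for reading these samples; values are reproduced from that convention and should be treated as illustrative:

    # nova.compute.power_state convention; 1 == RUNNING, which is what
    # both instances report above.
    POWER_STATES = {
        0: "NOSTATE",
        1: "RUNNING",
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }

    def describe(volume):
        return POWER_STATES.get(volume, f"unknown ({volume})")

    print(describe(1))  # -> RUNNING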
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.180 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.180 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.181 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.181 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T23:44:38.179752) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.181 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.181 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.181 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.181 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.182 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.write.latency volume: 71328069 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.182 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.183 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.183 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.183 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.183 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.183 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.183 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.183 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.184 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/network.incoming.bytes.delta volume: 1251 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.184 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.184 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.185 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.185 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T23:44:38.181511) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.185 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T23:44:38.183770) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.185 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.185 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.185 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.185 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.186 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T23:44:38.185751) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.186 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.186 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.write.requests volume: 35 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.186 14 DEBUG ceilometer.compute.pollsters [-] 2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.187 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.187 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.187 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.187 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.188 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.188 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.188 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.188 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-26T23:44:38.188211) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.188 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>]
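This ERROR is the pollster blacklisting mechanism at work: the libvirt inspector has no native rate data (see the "does not provide data" DEBUG line above), so the pollster raises PollsterPermanentError and the manager stops offering it those resources rather than retrying every cycle. A simplified sketch of the pattern (not the verbatim ceilometer code):

    class PollsterPermanentError(Exception):
        # Carries the resources that can never be served by this pollster.
        def __init__(self, resources):
            super().__init__(f"permanently failed for {resources}")
            self.fail_res_list = resources

    def get_samples(resource):
        # Stand-in for the inspector call that has no rate data.
        raise PollsterPermanentError([resource])

    def poll(resources, blacklist):
        for res in resources:
            if res in blacklist:
                continue  # skipped on every later cycle
            try:
                yield from get_samples(res)
            except PollsterPermanentError as exc:
                blacklist.update(exc.fail_res_list)

    blacklist = set()
    list(poll(["te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di"], blacklist))
    print(blacklist)  # the server is now permanently excluded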
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.189 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.189 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.189 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.189 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.189 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.189 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.189 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.190 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.190 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.190 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.190 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.190 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.190 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.190 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.190 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.190 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.190 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.190 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.191 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.191 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.191 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.191 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.191 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.191 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.191 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:44:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:44:38.191 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
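Each meter in this task is bracketed by a "Polling pollster X" / "Finished polling pollster X" INFO pair, which makes per-pollster latency easy to extract from the journal. An illustrative script (the input filename is a placeholder):

    import re
    from datetime import datetime

    PAT = re.compile(
        r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \d+ INFO "
        r"ceilometer\.polling\.manager \[-\] "
        r"(?P<event>Polling|Finished polling) pollster (?P<name>\S+)"
    )

    def durations(lines):
        started = {}
        for line in lines:
            m = PAT.search(line)
            if not m:
                continue
            ts = datetime.strptime(m["ts"], "%Y-%m-%d %H:%M:%S.%f")
            if m["event"] == "Polling":
                started[m["name"]] = ts
            elif m["name"] in started:
                yield m["name"], (ts - started.pop(m["name"])).total_seconds()

    with open("compute-0.log") as fh:  # placeholder path
        for name, secs in durations(fh):
            print(f"{name}: {secs * 1000:.1f} ms")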
Nov 26 23:44:40 compute-0 nova_compute[189387]: 2025-11-26 23:44:40.698 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:44:40 compute-0 nova_compute[189387]: 2025-11-26 23:44:40.976 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:44:42 compute-0 nova_compute[189387]: 2025-11-26 23:44:42.147 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:44:42 compute-0 nova_compute[189387]: 2025-11-26 23:44:42.147 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:44:42 compute-0 nova_compute[189387]: 2025-11-26 23:44:42.147 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 23:44:42 compute-0 nova_compute[189387]: 2025-11-26 23:44:42.404 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-2b8e8c61-3efb-436e-87b5-35ac9fe60d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:44:42 compute-0 nova_compute[189387]: 2025-11-26 23:44:42.405 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-2b8e8c61-3efb-436e-87b5-35ac9fe60d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:44:42 compute-0 nova_compute[189387]: 2025-11-26 23:44:42.405 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 23:44:42 compute-0 nova_compute[189387]: 2025-11-26 23:44:42.406 189391 DEBUG nova.objects.instance [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2b8e8c61-3efb-436e-87b5-35ac9fe60d69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 23:44:45 compute-0 ovn_controller[97697]: 2025-11-26T23:44:45Z|00214|binding|INFO|Releasing lport 6eddef7b-a60a-473c-89bf-18f9394dad32 from this chassis (sb_readonly=0)
Nov 26 23:44:45 compute-0 ovn_controller[97697]: 2025-11-26T23:44:45Z|00215|binding|INFO|Releasing lport 7b0be577-69f9-4df8-992b-e7c104217e56 from this chassis (sb_readonly=0)
Nov 26 23:44:45 compute-0 nova_compute[189387]: 2025-11-26 23:44:45.326 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:44:45 compute-0 nova_compute[189387]: 2025-11-26 23:44:45.603 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Updating instance_info_cache with network_info: [{"id": "798557c8-33b8-48fa-ba80-092115a6af38", "address": "fa:16:3e:56:6c:8b", "network": {"id": "d6f23c8c-9266-4c49-bc94-0b9f021c07c2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-495565316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5cd62a5ad724aed83d939e3ba6d7fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798557c8-33", "ovs_interfaceid": "798557c8-33b8-48fa-ba80-092115a6af38", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
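The network_info payload written back to the cache is plain JSON, so the addressing is easy to pull out: this instance holds fixed IP 10.100.0.6 with floating IP 192.168.122.234 behind port 798557c8-33b8-48fa-ba80-092115a6af38. A trimmed parsing example (payload reduced to the relevant keys from the line above):

    import json

    payload = json.loads("""[{"id": "798557c8-33b8-48fa-ba80-092115a6af38",
      "network": {"subnets": [{"ips": [{"address": "10.100.0.6",
        "floating_ips": [{"address": "192.168.122.234"}]}]}]}}]""")

    for vif in payload:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], floats)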
Nov 26 23:44:45 compute-0 nova_compute[189387]: 2025-11-26 23:44:45.625 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-2b8e8c61-3efb-436e-87b5-35ac9fe60d69" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 23:44:45 compute-0 nova_compute[189387]: 2025-11-26 23:44:45.627 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 26 23:44:45 compute-0 nova_compute[189387]: 2025-11-26 23:44:45.629 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:44:45 compute-0 nova_compute[189387]: 2025-11-26 23:44:45.630 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
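Both _heal_instance_info_cache and _reclaim_queued_deletes are driven by oslo.service's periodic task machinery; the latter returns immediately here because reclaim_instance_interval is not set to a positive value. A minimal sketch of the pattern, assuming oslo.service and oslo.config are installed (the spacing values are invented; nova wires them to config options):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _heal_instance_info_cache(self, context):
            print("heal one instance's network info cache")

        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _reclaim_queued_deletes(self, context):
            # Nova logs "skipping..." and returns when the interval is <= 0.
            print("CONF.reclaim_instance_interval <= 0, skipping...")

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)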
Nov 26 23:44:45 compute-0 nova_compute[189387]: 2025-11-26 23:44:45.702 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:44:45 compute-0 podman[253254]: 2025-11-26 23:44:45.863537827 +0000 UTC m=+0.151952971 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 23:44:45 compute-0 nova_compute[189387]: 2025-11-26 23:44:45.978 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:44:45 compute-0 podman[253280]: 2025-11-26 23:44:45.996392465 +0000 UTC m=+0.088497264 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 26 23:44:46 compute-0 podman[253287]: 2025-11-26 23:44:46.004584669 +0000 UTC m=+0.081244685 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 26 23:44:46 compute-0 podman[253278]: 2025-11-26 23:44:46.013562455 +0000 UTC m=+0.100362009 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, architecture=x86_64, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, name=ubi9, build-date=2024-09-18T21:23:30, container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=)
Nov 26 23:44:46 compute-0 podman[253281]: 2025-11-26 23:44:46.021179884 +0000 UTC m=+0.099334371 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc.)
Nov 26 23:44:46 compute-0 podman[253279]: 2025-11-26 23:44:46.02943174 +0000 UTC m=+0.125467377 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
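The podman health_status=healthy events above come from the healthchecks declared in each container's config_data ('test': '/openstack/healthcheck ...'), which podman runs inside the container on a timer. The same state can be queried on demand; the container names below are the ones from the log:

    import json
    import subprocess

    for name in ("ovn_controller", "ovn_metadata_agent",
                 "ceilometer_agent_ipmi", "node_exporter"):
        out = subprocess.run(
            ["podman", "inspect", name],
            capture_output=True, text=True, check=True,
        ).stdout
        state = json.loads(out)[0]["State"]
        print(name, state.get("Health", {}).get("Status", "no healthcheck"))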
Nov 26 23:44:47 compute-0 nova_compute[189387]: 2025-11-26 23:44:47.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:44:47 compute-0 nova_compute[189387]: 2025-11-26 23:44:47.146 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:44:47 compute-0 nova_compute[189387]: 2025-11-26 23:44:47.146 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:44:47 compute-0 nova_compute[189387]: 2025-11-26 23:44:47.146 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:44:47 compute-0 nova_compute[189387]: 2025-11-26 23:44:47.147 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:44:47 compute-0 nova_compute[189387]: 2025-11-26 23:44:47.236 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:44:47 compute-0 nova_compute[189387]: 2025-11-26 23:44:47.346 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:44:47 compute-0 nova_compute[189387]: 2025-11-26 23:44:47.348 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:44:47 compute-0 nova_compute[189387]: 2025-11-26 23:44:47.438 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:44:47 compute-0 nova_compute[189387]: 2025-11-26 23:44:47.449 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:44:47 compute-0 nova_compute[189387]: 2025-11-26 23:44:47.531 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:44:47 compute-0 nova_compute[189387]: 2025-11-26 23:44:47.533 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:44:47 compute-0 nova_compute[189387]: 2025-11-26 23:44:47.596 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
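The resource audit sizes each instance disk with qemu-img info, wrapped in oslo_concurrency.prlimit to cap the helper at 1 GiB of address space and 30 s of CPU time. The call is reproducible by hand; the disk path and flags below are taken straight from the logged command line:

    import json
    import subprocess

    DISK = "/var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk"

    out = subprocess.run(
        ["python3", "-m", "oslo_concurrency.prlimit",
         "--as=1073741824", "--cpu=30", "--",
         "env", "LC_ALL=C", "LANG=C",
         "qemu-img", "info", DISK, "--force-share", "--output=json"],
        capture_output=True, text=True, check=True,
    ).stdout

    info = json.loads(out)
    print(info["format"], info["virtual-size"], info.get("actual-size"))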
Nov 26 23:44:47 compute-0 nova_compute[189387]: 2025-11-26 23:44:47.945 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:44:47 compute-0 nova_compute[189387]: 2025-11-26 23:44:47.947 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5048MB free_disk=72.27743530273438GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
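The pci_devices field in the resource view is itself JSON; grouping by vendor separates the virtio devices (vendor 1af4) from the emulated Intel chipset functions (8086). For example (list truncated to two of the eleven devices logged above):

    import json
    from collections import Counter

    pci_devices = json.loads("""[
      {"address": "0000:00:04.0", "vendor_id": "1af4", "product_id": "1001"},
      {"address": "0000:00:00.0", "vendor_id": "8086", "product_id": "1237"}
    ]""")

    print(Counter(dev["vendor_id"] for dev in pci_devices))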
Nov 26 23:44:47 compute-0 nova_compute[189387]: 2025-11-26 23:44:47.947 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:44:47 compute-0 nova_compute[189387]: 2025-11-26 23:44:47.948 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:44:48 compute-0 nova_compute[189387]: 2025-11-26 23:44:48.225 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 2b8e8c61-3efb-436e-87b5-35ac9fe60d69 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:44:48 compute-0 nova_compute[189387]: 2025-11-26 23:44:48.226 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 0449208f-d12b-40cb-aa71-6f67f687cb6f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:44:48 compute-0 nova_compute[189387]: 2025-11-26 23:44:48.227 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:44:48 compute-0 nova_compute[189387]: 2025-11-26 23:44:48.228 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 23:44:48 compute-0 nova_compute[189387]: 2025-11-26 23:44:48.342 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing inventories for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 26 23:44:48 compute-0 nova_compute[189387]: 2025-11-26 23:44:48.425 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Updating ProviderTree inventory for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 26 23:44:48 compute-0 nova_compute[189387]: 2025-11-26 23:44:48.426 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Updating inventory in ProviderTree for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 26 23:44:48 compute-0 nova_compute[189387]: 2025-11-26 23:44:48.443 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing aggregate associations for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 26 23:44:48 compute-0 nova_compute[189387]: 2025-11-26 23:44:48.468 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing trait associations for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78, traits: COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE41,HW_CPU_X86_AMD_SVM,HW_CPU_X86_MMX,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,HW_CPU_X86_SSE,HW_CPU_X86_AESNI,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI2,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 26 23:44:48 compute-0 nova_compute[189387]: 2025-11-26 23:44:48.549 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:44:48 compute-0 nova_compute[189387]: 2025-11-26 23:44:48.570 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 23:44:48 compute-0 nova_compute[189387]: 2025-11-26 23:44:48.591 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 23:44:48 compute-0 nova_compute[189387]: 2025-11-26 23:44:48.591 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:44:49 compute-0 ovn_controller[97697]: 2025-11-26T23:44:49Z|00216|binding|INFO|Releasing lport 6eddef7b-a60a-473c-89bf-18f9394dad32 from this chassis (sb_readonly=0)
Nov 26 23:44:49 compute-0 ovn_controller[97697]: 2025-11-26T23:44:49Z|00217|binding|INFO|Releasing lport 7b0be577-69f9-4df8-992b-e7c104217e56 from this chassis (sb_readonly=0)
Nov 26 23:44:49 compute-0 nova_compute[189387]: 2025-11-26 23:44:49.587 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:44:49 compute-0 nova_compute[189387]: 2025-11-26 23:44:49.588 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:44:49 compute-0 nova_compute[189387]: 2025-11-26 23:44:49.631 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:50 compute-0 nova_compute[189387]: 2025-11-26 23:44:50.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:44:50 compute-0 nova_compute[189387]: 2025-11-26 23:44:50.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:44:50 compute-0 nova_compute[189387]: 2025-11-26 23:44:50.708 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:50 compute-0 nova_compute[189387]: 2025-11-26 23:44:50.979 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:51 compute-0 nova_compute[189387]: 2025-11-26 23:44:51.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:44:52 compute-0 nova_compute[189387]: 2025-11-26 23:44:52.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:44:52 compute-0 nova_compute[189387]: 2025-11-26 23:44:52.126 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.481 189391 DEBUG oslo_concurrency.lockutils [None req-01f41891-9cff-4403-8456-508aee76feea 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Acquiring lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.484 189391 DEBUG oslo_concurrency.lockutils [None req-01f41891-9cff-4403-8456-508aee76feea 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.485 189391 DEBUG oslo_concurrency.lockutils [None req-01f41891-9cff-4403-8456-508aee76feea 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Acquiring lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.486 189391 DEBUG oslo_concurrency.lockutils [None req-01f41891-9cff-4403-8456-508aee76feea 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.487 189391 DEBUG oslo_concurrency.lockutils [None req-01f41891-9cff-4403-8456-508aee76feea 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.490 189391 INFO nova.compute.manager [None req-01f41891-9cff-4403-8456-508aee76feea 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Terminating instance#033[00m
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.493 189391 DEBUG nova.compute.manager [None req-01f41891-9cff-4403-8456-508aee76feea 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 26 23:44:54 compute-0 kernel: tap798557c8-33 (unregistering): left promiscuous mode
Nov 26 23:44:54 compute-0 NetworkManager[56227]: <info>  [1764200694.5475] device (tap798557c8-33): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 23:44:54 compute-0 ovn_controller[97697]: 2025-11-26T23:44:54Z|00218|binding|INFO|Releasing lport 798557c8-33b8-48fa-ba80-092115a6af38 from this chassis (sb_readonly=0)
Nov 26 23:44:54 compute-0 ovn_controller[97697]: 2025-11-26T23:44:54Z|00219|binding|INFO|Setting lport 798557c8-33b8-48fa-ba80-092115a6af38 down in Southbound
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.578 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:54 compute-0 ovn_controller[97697]: 2025-11-26T23:44:54Z|00220|binding|INFO|Removing iface tap798557c8-33 ovn-installed in OVS
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.591 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:54 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:54.607 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:6c:8b 10.100.0.6'], port_security=['fa:16:3e:56:6c:8b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2b8e8c61-3efb-436e-87b5-35ac9fe60d69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d6f23c8c-9266-4c49-bc94-0b9f021c07c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5cd62a5ad724aed83d939e3ba6d7fd7', 'neutron:revision_number': '6', 'neutron:security_group_ids': '4dbe9fb4-ed7b-48b4-a9c5-2b96bb554e51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b0599c7c-1f2c-4f1e-9216-c20a57ddeefa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=798557c8-33b8-48fa-ba80-092115a6af38) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:44:54 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:54.610 106595 INFO neutron.agent.ovn.metadata.agent [-] Port 798557c8-33b8-48fa-ba80-092115a6af38 in datapath d6f23c8c-9266-4c49-bc94-0b9f021c07c2 unbound from our chassis#033[00m
Nov 26 23:44:54 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:54.612 106595 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d6f23c8c-9266-4c49-bc94-0b9f021c07c2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 26 23:44:54 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:54.614 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[dc39d044-83d9-4ad4-bb8c-64e565ddaf27]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:54 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:54.615 106595 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2 namespace which is not needed anymore#033[00m
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.628 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:54 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Nov 26 23:44:54 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000b.scope: Consumed 44.223s CPU time.
Nov 26 23:44:54 compute-0 systemd-machined[155674]: Machine qemu-14-instance-0000000b terminated.
Nov 26 23:44:54 compute-0 kernel: tap798557c8-33: entered promiscuous mode
Nov 26 23:44:54 compute-0 kernel: tap798557c8-33 (unregistering): left promiscuous mode
Nov 26 23:44:54 compute-0 NetworkManager[56227]: <info>  [1764200694.7306] manager: (tap798557c8-33): new Tun device (/org/freedesktop/NetworkManager/Devices/74)
Nov 26 23:44:54 compute-0 ovn_controller[97697]: 2025-11-26T23:44:54Z|00221|binding|INFO|Claiming lport 798557c8-33b8-48fa-ba80-092115a6af38 for this chassis.
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.738 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:54 compute-0 ovn_controller[97697]: 2025-11-26T23:44:54Z|00222|binding|INFO|798557c8-33b8-48fa-ba80-092115a6af38: Claiming fa:16:3e:56:6c:8b 10.100.0.6
Nov 26 23:44:54 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:54.757 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:6c:8b 10.100.0.6'], port_security=['fa:16:3e:56:6c:8b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2b8e8c61-3efb-436e-87b5-35ac9fe60d69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d6f23c8c-9266-4c49-bc94-0b9f021c07c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5cd62a5ad724aed83d939e3ba6d7fd7', 'neutron:revision_number': '6', 'neutron:security_group_ids': '4dbe9fb4-ed7b-48b4-a9c5-2b96bb554e51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b0599c7c-1f2c-4f1e-9216-c20a57ddeefa, chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=798557c8-33b8-48fa-ba80-092115a6af38) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:44:54 compute-0 ovn_controller[97697]: 2025-11-26T23:44:54Z|00223|binding|INFO|Setting lport 798557c8-33b8-48fa-ba80-092115a6af38 ovn-installed in OVS
Nov 26 23:44:54 compute-0 ovn_controller[97697]: 2025-11-26T23:44:54Z|00224|binding|INFO|Setting lport 798557c8-33b8-48fa-ba80-092115a6af38 up in Southbound
Nov 26 23:44:54 compute-0 ovn_controller[97697]: 2025-11-26T23:44:54Z|00225|binding|INFO|Releasing lport 798557c8-33b8-48fa-ba80-092115a6af38 from this chassis (sb_readonly=1)
Nov 26 23:44:54 compute-0 ovn_controller[97697]: 2025-11-26T23:44:54Z|00226|if_status|INFO|Dropped 2 log messages in last 109 seconds (most recently, 109 seconds ago) due to excessive rate
Nov 26 23:44:54 compute-0 ovn_controller[97697]: 2025-11-26T23:44:54Z|00227|if_status|INFO|Not setting lport 798557c8-33b8-48fa-ba80-092115a6af38 down as sb is readonly
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.768 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.771 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:54 compute-0 ovn_controller[97697]: 2025-11-26T23:44:54Z|00228|binding|INFO|Releasing lport 798557c8-33b8-48fa-ba80-092115a6af38 from this chassis (sb_readonly=0)
Nov 26 23:44:54 compute-0 ovn_controller[97697]: 2025-11-26T23:44:54Z|00229|binding|INFO|Removing iface tap798557c8-33 ovn-installed in OVS
Nov 26 23:44:54 compute-0 ovn_controller[97697]: 2025-11-26T23:44:54Z|00230|binding|INFO|Setting lport 798557c8-33b8-48fa-ba80-092115a6af38 down in Southbound
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.774 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:54 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:54.782 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:6c:8b 10.100.0.6'], port_security=['fa:16:3e:56:6c:8b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '2b8e8c61-3efb-436e-87b5-35ac9fe60d69', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d6f23c8c-9266-4c49-bc94-0b9f021c07c2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b5cd62a5ad724aed83d939e3ba6d7fd7', 'neutron:revision_number': '6', 'neutron:security_group_ids': '4dbe9fb4-ed7b-48b4-a9c5-2b96bb554e51', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b0599c7c-1f2c-4f1e-9216-c20a57ddeefa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=798557c8-33b8-48fa-ba80-092115a6af38) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.790 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.796 189391 INFO nova.virt.libvirt.driver [-] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Instance destroyed successfully.#033[00m
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.796 189391 DEBUG nova.objects.instance [None req-01f41891-9cff-4403-8456-508aee76feea 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lazy-loading 'resources' on Instance uuid 2b8e8c61-3efb-436e-87b5-35ac9fe60d69 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.808 189391 DEBUG nova.virt.libvirt.vif [None req-01f41891-9cff-4403-8456-508aee76feea 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T23:42:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-317216903',display_name='tempest-ServerActionsTestJSON-server-317216903',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-317216903',id=11,image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBALDEq66uSnbDCnaPr9NW6WSucskLbrov7y7Lw8g6HLIB9MX0FvV85vzt5NxWgQHUlHzOWK54yVo80owjUx7VTSNbmpWR1rSDduj9dcSmqSox75C4uo2VseotetFpoaEEg==',key_name='tempest-keypair-1149430954',keypairs=<?>,launch_index=0,launched_at=2025-11-26T23:42:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b5cd62a5ad724aed83d939e3ba6d7fd7',ramdisk_id='',reservation_id='r-a5ssvw5x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='948c6d5b-0d46-4aec-8649-b6cdcb1a5694',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1783347258',owner_user_name='tempest-ServerActionsTestJSON-1783347258-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T23:43:48Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='3753fb1a520b4e088ce6979db5ae3773',uuid=2b8e8c61-3efb-436e-87b5-35ac9fe60d69,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "798557c8-33b8-48fa-ba80-092115a6af38", "address": "fa:16:3e:56:6c:8b", "network": {"id": "d6f23c8c-9266-4c49-bc94-0b9f021c07c2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-495565316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5cd62a5ad724aed83d939e3ba6d7fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798557c8-33", "ovs_interfaceid": "798557c8-33b8-48fa-ba80-092115a6af38", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.809 189391 DEBUG nova.network.os_vif_util [None req-01f41891-9cff-4403-8456-508aee76feea 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Converting VIF {"id": "798557c8-33b8-48fa-ba80-092115a6af38", "address": "fa:16:3e:56:6c:8b", "network": {"id": "d6f23c8c-9266-4c49-bc94-0b9f021c07c2", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-495565316-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b5cd62a5ad724aed83d939e3ba6d7fd7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap798557c8-33", "ovs_interfaceid": "798557c8-33b8-48fa-ba80-092115a6af38", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.809 189391 DEBUG nova.network.os_vif_util [None req-01f41891-9cff-4403-8456-508aee76feea 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:56:6c:8b,bridge_name='br-int',has_traffic_filtering=True,id=798557c8-33b8-48fa-ba80-092115a6af38,network=Network(d6f23c8c-9266-4c49-bc94-0b9f021c07c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap798557c8-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.810 189391 DEBUG os_vif [None req-01f41891-9cff-4403-8456-508aee76feea 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:6c:8b,bridge_name='br-int',has_traffic_filtering=True,id=798557c8-33b8-48fa-ba80-092115a6af38,network=Network(d6f23c8c-9266-4c49-bc94-0b9f021c07c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap798557c8-33') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.811 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.811 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap798557c8-33, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.815 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.817 189391 INFO os_vif [None req-01f41891-9cff-4403-8456-508aee76feea 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:6c:8b,bridge_name='br-int',has_traffic_filtering=True,id=798557c8-33b8-48fa-ba80-092115a6af38,network=Network(d6f23c8c-9266-4c49-bc94-0b9f021c07c2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap798557c8-33')#033[00m
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.818 189391 INFO nova.virt.libvirt.driver [None req-01f41891-9cff-4403-8456-508aee76feea 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Deleting instance files /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69_del#033[00m
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.819 189391 INFO nova.virt.libvirt.driver [None req-01f41891-9cff-4403-8456-508aee76feea 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Deletion of /var/lib/nova/instances/2b8e8c61-3efb-436e-87b5-35ac9fe60d69_del complete#033[00m
Nov 26 23:44:54 compute-0 neutron-haproxy-ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2[252715]: [NOTICE]   (252719) : haproxy version is 2.8.14-c23fe91
Nov 26 23:44:54 compute-0 neutron-haproxy-ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2[252715]: [NOTICE]   (252719) : path to executable is /usr/sbin/haproxy
Nov 26 23:44:54 compute-0 neutron-haproxy-ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2[252715]: [ALERT]    (252719) : Current worker (252721) exited with code 143 (Terminated)
Nov 26 23:44:54 compute-0 neutron-haproxy-ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2[252715]: [WARNING]  (252719) : All workers exited. Exiting... (0)
Nov 26 23:44:54 compute-0 systemd[1]: libpod-cecdf5ecfcbab2050bdd6a6494c766021b80032120305b4e9eb794cba34a9aa8.scope: Deactivated successfully.
Nov 26 23:44:54 compute-0 podman[253428]: 2025-11-26 23:44:54.874979763 +0000 UTC m=+0.055738588 container died cecdf5ecfcbab2050bdd6a6494c766021b80032120305b4e9eb794cba34a9aa8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.880 189391 INFO nova.compute.manager [None req-01f41891-9cff-4403-8456-508aee76feea 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Took 0.39 seconds to destroy the instance on the hypervisor.#033[00m
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.881 189391 DEBUG oslo.service.loopingcall [None req-01f41891-9cff-4403-8456-508aee76feea 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.881 189391 DEBUG nova.compute.manager [-] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 26 23:44:54 compute-0 nova_compute[189387]: 2025-11-26 23:44:54.882 189391 DEBUG nova.network.neutron [-] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 26 23:44:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5863185391fa75e8ff9e3d95087408279c165c8ade7ab44168c6c37851dd6ab-merged.mount: Deactivated successfully.
Nov 26 23:44:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cecdf5ecfcbab2050bdd6a6494c766021b80032120305b4e9eb794cba34a9aa8-userdata-shm.mount: Deactivated successfully.
Nov 26 23:44:54 compute-0 podman[253428]: 2025-11-26 23:44:54.931360796 +0000 UTC m=+0.112119621 container cleanup cecdf5ecfcbab2050bdd6a6494c766021b80032120305b4e9eb794cba34a9aa8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Nov 26 23:44:54 compute-0 systemd[1]: libpod-conmon-cecdf5ecfcbab2050bdd6a6494c766021b80032120305b4e9eb794cba34a9aa8.scope: Deactivated successfully.
Nov 26 23:44:55 compute-0 podman[253457]: 2025-11-26 23:44:55.029620847 +0000 UTC m=+0.059625954 container remove cecdf5ecfcbab2050bdd6a6494c766021b80032120305b4e9eb794cba34a9aa8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 26 23:44:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:55.053 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[bbb74603-ed29-4277-8083-cb973a527cac]: (4, ('Wed Nov 26 11:44:54 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2 (cecdf5ecfcbab2050bdd6a6494c766021b80032120305b4e9eb794cba34a9aa8)\ncecdf5ecfcbab2050bdd6a6494c766021b80032120305b4e9eb794cba34a9aa8\nWed Nov 26 11:44:54 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2 (cecdf5ecfcbab2050bdd6a6494c766021b80032120305b4e9eb794cba34a9aa8)\ncecdf5ecfcbab2050bdd6a6494c766021b80032120305b4e9eb794cba34a9aa8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:55.060 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[16261ce0-62f8-4e22-ad66-f51e313fffc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:55.062 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6f23c8c-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:44:55 compute-0 kernel: tapd6f23c8c-90: left promiscuous mode
Nov 26 23:44:55 compute-0 nova_compute[189387]: 2025-11-26 23:44:55.065 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:55 compute-0 nova_compute[189387]: 2025-11-26 23:44:55.068 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:55.074 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[4bb8ed17-ee05-4142-bf8b-b1b888948e0c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:55 compute-0 nova_compute[189387]: 2025-11-26 23:44:55.086 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:55.098 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[8d269a20-735d-4144-bdc4-0bc7b4153b51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:55.103 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[796d62d5-f2f8-443a-9d06-7477f16f4299]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:55.123 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[62479289-f19a-432f-89ea-ed2c469db4a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 531043, 'reachable_time': 19084, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253470, 'error': None, 'target': 'ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:55 compute-0 systemd[1]: run-netns-ovnmeta\x2dd6f23c8c\x2d9266\x2d4c49\x2dbc94\x2d0b9f021c07c2.mount: Deactivated successfully.
Nov 26 23:44:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:55.127 106708 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d6f23c8c-9266-4c49-bc94-0b9f021c07c2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 26 23:44:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:55.128 106708 DEBUG oslo.privsep.daemon [-] privsep: reply[800f8dcd-01b3-4df8-bbc0-f8ed2515325b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:55.130 106595 INFO neutron.agent.ovn.metadata.agent [-] Port 798557c8-33b8-48fa-ba80-092115a6af38 in datapath d6f23c8c-9266-4c49-bc94-0b9f021c07c2 unbound from our chassis#033[00m
Nov 26 23:44:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:55.132 106595 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d6f23c8c-9266-4c49-bc94-0b9f021c07c2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 26 23:44:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:55.133 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[00f0043b-ad54-42d5-9dfc-f24e70b2b65e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:55.134 106595 INFO neutron.agent.ovn.metadata.agent [-] Port 798557c8-33b8-48fa-ba80-092115a6af38 in datapath d6f23c8c-9266-4c49-bc94-0b9f021c07c2 unbound from our chassis#033[00m
Nov 26 23:44:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:55.135 106595 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d6f23c8c-9266-4c49-bc94-0b9f021c07c2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 26 23:44:55 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:44:55.136 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[469438c7-e5cb-4835-b5fb-48d6ffc0fb24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:44:55 compute-0 nova_compute[189387]: 2025-11-26 23:44:55.240 189391 DEBUG nova.compute.manager [req-e27f535f-039a-45ea-bd12-badafda5120a req-211749d0-63ee-43fd-ac57-c785211c890a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Received event network-vif-unplugged-798557c8-33b8-48fa-ba80-092115a6af38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:44:55 compute-0 nova_compute[189387]: 2025-11-26 23:44:55.240 189391 DEBUG oslo_concurrency.lockutils [req-e27f535f-039a-45ea-bd12-badafda5120a req-211749d0-63ee-43fd-ac57-c785211c890a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:44:55 compute-0 nova_compute[189387]: 2025-11-26 23:44:55.240 189391 DEBUG oslo_concurrency.lockutils [req-e27f535f-039a-45ea-bd12-badafda5120a req-211749d0-63ee-43fd-ac57-c785211c890a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:44:55 compute-0 nova_compute[189387]: 2025-11-26 23:44:55.241 189391 DEBUG oslo_concurrency.lockutils [req-e27f535f-039a-45ea-bd12-badafda5120a req-211749d0-63ee-43fd-ac57-c785211c890a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:44:55 compute-0 nova_compute[189387]: 2025-11-26 23:44:55.241 189391 DEBUG nova.compute.manager [req-e27f535f-039a-45ea-bd12-badafda5120a req-211749d0-63ee-43fd-ac57-c785211c890a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] No waiting events found dispatching network-vif-unplugged-798557c8-33b8-48fa-ba80-092115a6af38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:44:55 compute-0 nova_compute[189387]: 2025-11-26 23:44:55.241 189391 DEBUG nova.compute.manager [req-e27f535f-039a-45ea-bd12-badafda5120a req-211749d0-63ee-43fd-ac57-c785211c890a f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Received event network-vif-unplugged-798557c8-33b8-48fa-ba80-092115a6af38 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 26 23:44:55 compute-0 nova_compute[189387]: 2025-11-26 23:44:55.714 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:44:55 compute-0 nova_compute[189387]: 2025-11-26 23:44:55.922 189391 DEBUG nova.network.neutron [-] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:44:55 compute-0 nova_compute[189387]: 2025-11-26 23:44:55.946 189391 INFO nova.compute.manager [-] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Took 1.06 seconds to deallocate network for instance.#033[00m
Nov 26 23:44:55 compute-0 nova_compute[189387]: 2025-11-26 23:44:55.990 189391 DEBUG nova.compute.manager [req-68b85263-8174-45a3-bb7d-9268414e2212 req-e9c9faba-ab81-4330-bee9-7543916e59b1 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Received event network-vif-deleted-798557c8-33b8-48fa-ba80-092115a6af38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:44:55 compute-0 nova_compute[189387]: 2025-11-26 23:44:55.993 189391 DEBUG oslo_concurrency.lockutils [None req-01f41891-9cff-4403-8456-508aee76feea 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:44:55 compute-0 nova_compute[189387]: 2025-11-26 23:44:55.994 189391 DEBUG oslo_concurrency.lockutils [None req-01f41891-9cff-4403-8456-508aee76feea 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:44:56 compute-0 nova_compute[189387]: 2025-11-26 23:44:56.064 189391 DEBUG nova.compute.provider_tree [None req-01f41891-9cff-4403-8456-508aee76feea 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:44:56 compute-0 nova_compute[189387]: 2025-11-26 23:44:56.082 189391 DEBUG nova.scheduler.client.report [None req-01f41891-9cff-4403-8456-508aee76feea 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 23:44:56 compute-0 nova_compute[189387]: 2025-11-26 23:44:56.101 189391 DEBUG oslo_concurrency.lockutils [None req-01f41891-9cff-4403-8456-508aee76feea 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:44:56 compute-0 nova_compute[189387]: 2025-11-26 23:44:56.124 189391 INFO nova.scheduler.client.report [None req-01f41891-9cff-4403-8456-508aee76feea 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Deleted allocations for instance 2b8e8c61-3efb-436e-87b5-35ac9fe60d69#033[00m
Nov 26 23:44:56 compute-0 nova_compute[189387]: 2025-11-26 23:44:56.181 189391 DEBUG oslo_concurrency.lockutils [None req-01f41891-9cff-4403-8456-508aee76feea 3753fb1a520b4e088ce6979db5ae3773 b5cd62a5ad724aed83d939e3ba6d7fd7 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:44:56 compute-0 podman[253471]: 2025-11-26 23:44:56.821224729 +0000 UTC m=+0.103614427 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 26 23:44:57 compute-0 nova_compute[189387]: 2025-11-26 23:44:57.376 189391 DEBUG nova.compute.manager [req-b866f543-6364-482b-8eb6-46aa693ccfda req-5220537d-7e01-4fd4-a3ac-112c093e2afb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Received event network-vif-plugged-798557c8-33b8-48fa-ba80-092115a6af38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 23:44:57 compute-0 nova_compute[189387]: 2025-11-26 23:44:57.377 189391 DEBUG oslo_concurrency.lockutils [req-b866f543-6364-482b-8eb6-46aa693ccfda req-5220537d-7e01-4fd4-a3ac-112c093e2afb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:44:57 compute-0 nova_compute[189387]: 2025-11-26 23:44:57.378 189391 DEBUG oslo_concurrency.lockutils [req-b866f543-6364-482b-8eb6-46aa693ccfda req-5220537d-7e01-4fd4-a3ac-112c093e2afb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:44:57 compute-0 nova_compute[189387]: 2025-11-26 23:44:57.378 189391 DEBUG oslo_concurrency.lockutils [req-b866f543-6364-482b-8eb6-46aa693ccfda req-5220537d-7e01-4fd4-a3ac-112c093e2afb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:44:57 compute-0 nova_compute[189387]: 2025-11-26 23:44:57.378 189391 DEBUG nova.compute.manager [req-b866f543-6364-482b-8eb6-46aa693ccfda req-5220537d-7e01-4fd4-a3ac-112c093e2afb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] No waiting events found dispatching network-vif-plugged-798557c8-33b8-48fa-ba80-092115a6af38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 26 23:44:57 compute-0 nova_compute[189387]: 2025-11-26 23:44:57.379 189391 WARNING nova.compute.manager [req-b866f543-6364-482b-8eb6-46aa693ccfda req-5220537d-7e01-4fd4-a3ac-112c093e2afb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Received unexpected event network-vif-plugged-798557c8-33b8-48fa-ba80-092115a6af38 for instance with vm_state deleted and task_state None.
Nov 26 23:44:57 compute-0 nova_compute[189387]: 2025-11-26 23:44:57.379 189391 DEBUG nova.compute.manager [req-b866f543-6364-482b-8eb6-46aa693ccfda req-5220537d-7e01-4fd4-a3ac-112c093e2afb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Received event network-vif-plugged-798557c8-33b8-48fa-ba80-092115a6af38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 23:44:57 compute-0 nova_compute[189387]: 2025-11-26 23:44:57.380 189391 DEBUG oslo_concurrency.lockutils [req-b866f543-6364-482b-8eb6-46aa693ccfda req-5220537d-7e01-4fd4-a3ac-112c093e2afb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:44:57 compute-0 nova_compute[189387]: 2025-11-26 23:44:57.380 189391 DEBUG oslo_concurrency.lockutils [req-b866f543-6364-482b-8eb6-46aa693ccfda req-5220537d-7e01-4fd4-a3ac-112c093e2afb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:44:57 compute-0 nova_compute[189387]: 2025-11-26 23:44:57.381 189391 DEBUG oslo_concurrency.lockutils [req-b866f543-6364-482b-8eb6-46aa693ccfda req-5220537d-7e01-4fd4-a3ac-112c093e2afb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:44:57 compute-0 nova_compute[189387]: 2025-11-26 23:44:57.381 189391 DEBUG nova.compute.manager [req-b866f543-6364-482b-8eb6-46aa693ccfda req-5220537d-7e01-4fd4-a3ac-112c093e2afb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] No waiting events found dispatching network-vif-plugged-798557c8-33b8-48fa-ba80-092115a6af38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 26 23:44:57 compute-0 nova_compute[189387]: 2025-11-26 23:44:57.381 189391 WARNING nova.compute.manager [req-b866f543-6364-482b-8eb6-46aa693ccfda req-5220537d-7e01-4fd4-a3ac-112c093e2afb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Received unexpected event network-vif-plugged-798557c8-33b8-48fa-ba80-092115a6af38 for instance with vm_state deleted and task_state None.
Nov 26 23:44:57 compute-0 nova_compute[189387]: 2025-11-26 23:44:57.382 189391 DEBUG nova.compute.manager [req-b866f543-6364-482b-8eb6-46aa693ccfda req-5220537d-7e01-4fd4-a3ac-112c093e2afb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Received event network-vif-plugged-798557c8-33b8-48fa-ba80-092115a6af38 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 23:44:57 compute-0 nova_compute[189387]: 2025-11-26 23:44:57.382 189391 DEBUG oslo_concurrency.lockutils [req-b866f543-6364-482b-8eb6-46aa693ccfda req-5220537d-7e01-4fd4-a3ac-112c093e2afb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:44:57 compute-0 nova_compute[189387]: 2025-11-26 23:44:57.383 189391 DEBUG oslo_concurrency.lockutils [req-b866f543-6364-482b-8eb6-46aa693ccfda req-5220537d-7e01-4fd4-a3ac-112c093e2afb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:44:57 compute-0 nova_compute[189387]: 2025-11-26 23:44:57.383 189391 DEBUG oslo_concurrency.lockutils [req-b866f543-6364-482b-8eb6-46aa693ccfda req-5220537d-7e01-4fd4-a3ac-112c093e2afb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "2b8e8c61-3efb-436e-87b5-35ac9fe60d69-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:44:57 compute-0 nova_compute[189387]: 2025-11-26 23:44:57.383 189391 DEBUG nova.compute.manager [req-b866f543-6364-482b-8eb6-46aa693ccfda req-5220537d-7e01-4fd4-a3ac-112c093e2afb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] No waiting events found dispatching network-vif-plugged-798557c8-33b8-48fa-ba80-092115a6af38 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 26 23:44:57 compute-0 nova_compute[189387]: 2025-11-26 23:44:57.384 189391 WARNING nova.compute.manager [req-b866f543-6364-482b-8eb6-46aa693ccfda req-5220537d-7e01-4fd4-a3ac-112c093e2afb f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Received unexpected event network-vif-plugged-798557c8-33b8-48fa-ba80-092115a6af38 for instance with vm_state deleted and task_state None.
Nov 26 23:44:59 compute-0 podman[203621]: time="2025-11-26T23:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:44:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:44:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4809 "" "Go-http-client/1.1"
Nov 26 23:44:59 compute-0 nova_compute[189387]: 2025-11-26 23:44:59.815 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:45:00 compute-0 nova_compute[189387]: 2025-11-26 23:45:00.717 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:45:00 compute-0 podman[253491]: 2025-11-26 23:45:00.807983245 +0000 UTC m=+0.110736834 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 23:45:00 compute-0 ovn_controller[97697]: 2025-11-26T23:45:00Z|00231|binding|INFO|Releasing lport 6eddef7b-a60a-473c-89bf-18f9394dad32 from this chassis (sb_readonly=0)
Nov 26 23:45:00 compute-0 nova_compute[189387]: 2025-11-26 23:45:00.840 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:45:01 compute-0 ovn_controller[97697]: 2025-11-26T23:45:01Z|00232|binding|INFO|Releasing lport 6eddef7b-a60a-473c-89bf-18f9394dad32 from this chassis (sb_readonly=0)
Nov 26 23:45:01 compute-0 nova_compute[189387]: 2025-11-26 23:45:01.047 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:45:01 compute-0 nova_compute[189387]: 2025-11-26 23:45:01.144 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:45:01 compute-0 nova_compute[189387]: 2025-11-26 23:45:01.144 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 26 23:45:01 compute-0 openstack_network_exporter[205787]: ERROR   23:45:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:45:01 compute-0 openstack_network_exporter[205787]: ERROR   23:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:45:01 compute-0 openstack_network_exporter[205787]: ERROR   23:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:45:01 compute-0 openstack_network_exporter[205787]: ERROR   23:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:45:01 compute-0 openstack_network_exporter[205787]: ERROR   23:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:45:04 compute-0 ovn_controller[97697]: 2025-11-26T23:45:04Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d6:2e:64 10.100.2.181
Nov 26 23:45:04 compute-0 ovn_controller[97697]: 2025-11-26T23:45:04Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d6:2e:64 10.100.2.181
Nov 26 23:45:04 compute-0 nova_compute[189387]: 2025-11-26 23:45:04.819 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:45:05 compute-0 nova_compute[189387]: 2025-11-26 23:45:05.722 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:45:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:45:06.623 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:74:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:17:d1:48:8c:c3'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 23:45:06 compute-0 nova_compute[189387]: 2025-11-26 23:45:06.626 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:45:06 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:45:06.627 106595 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 26 23:45:07 compute-0 podman[253526]: 2025-11-26 23:45:07.797848351 +0000 UTC m=+0.088664739 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute)
Nov 26 23:45:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:45:09.631 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbd59242-3683-4df7-8a2a-12b2eb702783, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 26 23:45:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:45:09.654 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:45:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:45:09.655 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:45:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:45:09.657 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:45:09 compute-0 nova_compute[189387]: 2025-11-26 23:45:09.783 189391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764200694.782043, 2b8e8c61-3efb-436e-87b5-35ac9fe60d69 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 23:45:09 compute-0 nova_compute[189387]: 2025-11-26 23:45:09.784 189391 INFO nova.compute.manager [-] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] VM Stopped (Lifecycle Event)
Nov 26 23:45:09 compute-0 nova_compute[189387]: 2025-11-26 23:45:09.822 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:45:09 compute-0 nova_compute[189387]: 2025-11-26 23:45:09.985 189391 DEBUG nova.compute.manager [None req-277e5f6c-8326-49d4-84e8-407a73734fcb - - - - - -] [instance: 2b8e8c61-3efb-436e-87b5-35ac9fe60d69] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 23:45:10 compute-0 nova_compute[189387]: 2025-11-26 23:45:10.726 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:45:14 compute-0 nova_compute[189387]: 2025-11-26 23:45:14.825 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:45:15 compute-0 nova_compute[189387]: 2025-11-26 23:45:15.729 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:45:16 compute-0 podman[253549]: 2025-11-26 23:45:16.855376 +0000 UTC m=+0.117175379 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 23:45:16 compute-0 podman[253547]: 2025-11-26 23:45:16.866744351 +0000 UTC m=+0.142949984 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, version=9.4, config_id=edpm, container_name=kepler, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, name=ubi9)
Nov 26 23:45:16 compute-0 podman[253550]: 2025-11-26 23:45:16.89042147 +0000 UTC m=+0.141988860 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 26 23:45:16 compute-0 podman[253568]: 2025-11-26 23:45:16.891123478 +0000 UTC m=+0.130019121 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, release=1755695350, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, version=9.6, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9)
Nov 26 23:45:16 compute-0 podman[253556]: 2025-11-26 23:45:16.894163222 +0000 UTC m=+0.136156689 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125)
Nov 26 23:45:16 compute-0 podman[253548]: 2025-11-26 23:45:16.92771228 +0000 UTC m=+0.187889225 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 26 23:45:19 compute-0 nova_compute[189387]: 2025-11-26 23:45:19.829 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:45:20 compute-0 nova_compute[189387]: 2025-11-26 23:45:20.733 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:45:24 compute-0 nova_compute[189387]: 2025-11-26 23:45:24.832 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:45:25 compute-0 nova_compute[189387]: 2025-11-26 23:45:25.737 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:45:27 compute-0 podman[253663]: 2025-11-26 23:45:27.807650535 +0000 UTC m=+0.086905240 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 26 23:45:29 compute-0 podman[203621]: time="2025-11-26T23:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:45:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:45:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4819 "" "Go-http-client/1.1"
Nov 26 23:45:29 compute-0 nova_compute[189387]: 2025-11-26 23:45:29.837 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:45:30 compute-0 nova_compute[189387]: 2025-11-26 23:45:30.738 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:45:31 compute-0 openstack_network_exporter[205787]: ERROR   23:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:45:31 compute-0 openstack_network_exporter[205787]: ERROR   23:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:45:31 compute-0 openstack_network_exporter[205787]: ERROR   23:45:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:45:31 compute-0 openstack_network_exporter[205787]: ERROR   23:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:45:31 compute-0 openstack_network_exporter[205787]: ERROR   23:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:45:31 compute-0 ovn_controller[97697]: 2025-11-26T23:45:31Z|00233|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 26 23:45:31 compute-0 podman[253683]: 2025-11-26 23:45:31.815992621 +0000 UTC m=+0.113417096 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:45:34 compute-0 nova_compute[189387]: 2025-11-26 23:45:34.841 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:45:35 compute-0 nova_compute[189387]: 2025-11-26 23:45:35.740 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:45:38 compute-0 podman[253706]: 2025-11-26 23:45:38.830312459 +0000 UTC m=+0.110102436 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute)
Nov 26 23:45:39 compute-0 nova_compute[189387]: 2025-11-26 23:45:39.844 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:45:40 compute-0 nova_compute[189387]: 2025-11-26 23:45:40.743 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:45:42 compute-0 nova_compute[189387]: 2025-11-26 23:45:42.142 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:45:42 compute-0 nova_compute[189387]: 2025-11-26 23:45:42.143 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:45:43 compute-0 nova_compute[189387]: 2025-11-26 23:45:43.172 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-0449208f-d12b-40cb-aa71-6f67f687cb6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:45:43 compute-0 nova_compute[189387]: 2025-11-26 23:45:43.172 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-0449208f-d12b-40cb-aa71-6f67f687cb6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:45:43 compute-0 nova_compute[189387]: 2025-11-26 23:45:43.172 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 23:45:44 compute-0 nova_compute[189387]: 2025-11-26 23:45:44.846 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:45:45 compute-0 nova_compute[189387]: 2025-11-26 23:45:45.746 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:45:46 compute-0 nova_compute[189387]: 2025-11-26 23:45:46.334 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Updating instance_info_cache with network_info: [{"id": "a6675240-60ea-47db-9ef6-66080adb5743", "address": "fa:16:3e:d6:2e:64", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6675240-60", "ovs_interfaceid": "a6675240-60ea-47db-9ef6-66080adb5743", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 23:45:46 compute-0 nova_compute[189387]: 2025-11-26 23:45:46.358 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-0449208f-d12b-40cb-aa71-6f67f687cb6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 23:45:46 compute-0 nova_compute[189387]: 2025-11-26 23:45:46.359 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 26 23:45:46 compute-0 nova_compute[189387]: 2025-11-26 23:45:46.360 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:45:46 compute-0 nova_compute[189387]: 2025-11-26 23:45:46.360 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 23:45:47 compute-0 nova_compute[189387]: 2025-11-26 23:45:47.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:45:47 compute-0 nova_compute[189387]: 2025-11-26 23:45:47.155 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:45:47 compute-0 nova_compute[189387]: 2025-11-26 23:45:47.156 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:45:47 compute-0 nova_compute[189387]: 2025-11-26 23:45:47.156 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:45:47 compute-0 nova_compute[189387]: 2025-11-26 23:45:47.157 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:45:47 compute-0 nova_compute[189387]: 2025-11-26 23:45:47.255 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:45:47 compute-0 podman[253726]: 2025-11-26 23:45:47.319329072 +0000 UTC m=+0.103619338 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, vcs-type=git, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=kepler, name=ubi9, release-0.7.12=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, release=1214.1726694543, vendor=Red Hat, Inc.)
Nov 26 23:45:47 compute-0 nova_compute[189387]: 2025-11-26 23:45:47.337 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:45:47 compute-0 nova_compute[189387]: 2025-11-26 23:45:47.338 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:45:47 compute-0 podman[253740]: 2025-11-26 23:45:47.34225068 +0000 UTC m=+0.102380414 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 23:45:47 compute-0 podman[253745]: 2025-11-26 23:45:47.358114354 +0000 UTC m=+0.100779111 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, config_id=edpm, container_name=openstack_network_exporter, managed_by=edpm_ansible, distribution-scope=public, io.buildah.version=1.33.7)
Nov 26 23:45:47 compute-0 podman[253738]: 2025-11-26 23:45:47.360268273 +0000 UTC m=+0.121682153 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 26 23:45:47 compute-0 podman[253728]: 2025-11-26 23:45:47.365941738 +0000 UTC m=+0.122554907 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:45:47 compute-0 podman[253727]: 2025-11-26 23:45:47.368712854 +0000 UTC m=+0.150435900 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 23:45:47 compute-0 nova_compute[189387]: 2025-11-26 23:45:47.406 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:45:47 compute-0 nova_compute[189387]: 2025-11-26 23:45:47.745 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:45:47 compute-0 nova_compute[189387]: 2025-11-26 23:45:47.746 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5191MB free_disk=72.27799606323242GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 23:45:47 compute-0 nova_compute[189387]: 2025-11-26 23:45:47.747 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:45:47 compute-0 nova_compute[189387]: 2025-11-26 23:45:47.747 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:45:47 compute-0 nova_compute[189387]: 2025-11-26 23:45:47.837 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 0449208f-d12b-40cb-aa71-6f67f687cb6f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:45:47 compute-0 nova_compute[189387]: 2025-11-26 23:45:47.838 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:45:47 compute-0 nova_compute[189387]: 2025-11-26 23:45:47.838 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 23:45:47 compute-0 nova_compute[189387]: 2025-11-26 23:45:47.891 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:45:47 compute-0 nova_compute[189387]: 2025-11-26 23:45:47.951 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 23:45:48 compute-0 nova_compute[189387]: 2025-11-26 23:45:48.055 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:45:48 compute-0 nova_compute[189387]: 2025-11-26 23:45:48.056 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.309s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:45:49 compute-0 nova_compute[189387]: 2025-11-26 23:45:49.058 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:45:49 compute-0 nova_compute[189387]: 2025-11-26 23:45:49.849 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:45:50 compute-0 nova_compute[189387]: 2025-11-26 23:45:50.120 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:45:50 compute-0 nova_compute[189387]: 2025-11-26 23:45:50.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:45:50 compute-0 nova_compute[189387]: 2025-11-26 23:45:50.747 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:45:52 compute-0 nova_compute[189387]: 2025-11-26 23:45:52.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:45:53 compute-0 nova_compute[189387]: 2025-11-26 23:45:53.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:45:53 compute-0 nova_compute[189387]: 2025-11-26 23:45:53.126 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:45:54 compute-0 nova_compute[189387]: 2025-11-26 23:45:54.853 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:45:55 compute-0 nova_compute[189387]: 2025-11-26 23:45:55.750 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:45:58 compute-0 podman[253846]: 2025-11-26 23:45:58.80531761 +0000 UTC m=+0.093262425 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:45:59 compute-0 podman[203621]: time="2025-11-26T23:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:45:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:45:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
Nov 26 23:45:59 compute-0 nova_compute[189387]: 2025-11-26 23:45:59.856 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:46:00 compute-0 nova_compute[189387]: 2025-11-26 23:46:00.753 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:46:01 compute-0 openstack_network_exporter[205787]: ERROR   23:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:46:01 compute-0 openstack_network_exporter[205787]: ERROR   23:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:46:01 compute-0 openstack_network_exporter[205787]: ERROR   23:46:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:46:01 compute-0 openstack_network_exporter[205787]: ERROR   23:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:46:01 compute-0 openstack_network_exporter[205787]: ERROR   23:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:46:02 compute-0 nova_compute[189387]: 2025-11-26 23:46:02.121 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:46:02 compute-0 podman[253865]: 2025-11-26 23:46:02.795731695 +0000 UTC m=+0.097186432 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 23:46:04 compute-0 nova_compute[189387]: 2025-11-26 23:46:04.858 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:46:05 compute-0 nova_compute[189387]: 2025-11-26 23:46:05.756 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:46:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:46:09.655 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:46:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:46:09.657 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:46:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:46:09.658 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:46:09 compute-0 nova_compute[189387]: 2025-11-26 23:46:09.862 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:46:09 compute-0 podman[253897]: 2025-11-26 23:46:09.873633962 +0000 UTC m=+0.160974609 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 26 23:46:10 compute-0 nova_compute[189387]: 2025-11-26 23:46:10.760 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:46:14 compute-0 nova_compute[189387]: 2025-11-26 23:46:14.866 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:46:15 compute-0 nova_compute[189387]: 2025-11-26 23:46:15.764 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:46:17 compute-0 podman[253916]: 2025-11-26 23:46:17.828998724 +0000 UTC m=+0.119797571 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, release-0.7.12=, vcs-type=git, distribution-scope=public, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, architecture=x86_64, config_id=edpm, io.buildah.version=1.29.0, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 26 23:46:17 compute-0 podman[253925]: 2025-11-26 23:46:17.83137623 +0000 UTC m=+0.093186623 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 26 23:46:17 compute-0 podman[253933]: 2025-11-26 23:46:17.866070629 +0000 UTC m=+0.103066552 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 23:46:17 compute-0 podman[253947]: 2025-11-26 23:46:17.878237462 +0000 UTC m=+0.117020805 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, release=1755695350, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, architecture=x86_64, io.openshift.tags=minimal rhel9, distribution-scope=public)
Nov 26 23:46:17 compute-0 podman[253923]: 2025-11-26 23:46:17.881716298 +0000 UTC m=+0.139229003 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 23:46:17 compute-0 podman[253917]: 2025-11-26 23:46:17.884885025 +0000 UTC m=+0.160109616 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 23:46:19 compute-0 nova_compute[189387]: 2025-11-26 23:46:19.871 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:46:20 compute-0 nova_compute[189387]: 2025-11-26 23:46:20.769 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:46:24 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 26 23:46:24 compute-0 nova_compute[189387]: 2025-11-26 23:46:24.874 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:46:25 compute-0 nova_compute[189387]: 2025-11-26 23:46:25.776 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:46:29 compute-0 podman[203621]: time="2025-11-26T23:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:46:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:46:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4811 "" "Go-http-client/1.1"
Nov 26 23:46:29 compute-0 podman[254035]: 2025-11-26 23:46:29.856029346 +0000 UTC m=+0.135172582 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 26 23:46:29 compute-0 nova_compute[189387]: 2025-11-26 23:46:29.877 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:46:30 compute-0 nova_compute[189387]: 2025-11-26 23:46:30.776 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:46:31 compute-0 openstack_network_exporter[205787]: ERROR   23:46:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:46:31 compute-0 openstack_network_exporter[205787]: ERROR   23:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:46:31 compute-0 openstack_network_exporter[205787]: ERROR   23:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:46:31 compute-0 openstack_network_exporter[205787]: ERROR   23:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:46:31 compute-0 openstack_network_exporter[205787]: ERROR   23:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:46:33 compute-0 podman[254054]: 2025-11-26 23:46:33.798368584 +0000 UTC m=+0.092240966 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:46:34 compute-0 nova_compute[189387]: 2025-11-26 23:46:34.879 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:46:35 compute-0 nova_compute[189387]: 2025-11-26 23:46:35.777 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.849 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.850 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.850 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce501b5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce501b5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce501b5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce501b5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce501b5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce501b5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce501b5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce501b5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce501b5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce501b5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce501b5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce501b5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce501b5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce501b5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce501b5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce501b5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.860 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '0449208f-d12b-40cb-aa71-6f67f687cb6f', 'name': 'te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di', 'flavor': {'id': 'a4234b2d-ed51-4e17-ad57-a8fb6154451b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'aa1a3d84-3b07-42eb-bb8c-755851616ed6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '717a3950b66241768222cb5d4ba3291e', 'user_id': '5715267a6ec9422aa9b3ef4a2956aa77', 'hostId': '27d3802b1abe41bf2d1abd490eb0aa08acfb598924ded34a7e1a15fc', 'status': 'active', 'metadata': {'metering.server_group': '92e43243-aca7-437e-ae08-bcb42a48e489'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.861 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.861 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.861 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.861 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.861 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.862 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.862 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.862 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.862 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.863 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T23:46:36.861400) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.863 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T23:46:36.862655) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce501b5f0>] with cache [{'inspect_vnics': {}}], pollster history [{'disk.ephemeral.size': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>], 'network.incoming.packets': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.865 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce501b5f0>] with cache [{'inspect_vnics': {}}], pollster history [{'disk.ephemeral.size': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>], 'network.incoming.packets': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.866 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce501b5f0>] with cache [{'inspect_vnics': {}}], pollster history [{'disk.ephemeral.size': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>], 'network.incoming.packets': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.866 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce501b5f0>] with cache [{'inspect_vnics': {}}], pollster history [{'disk.ephemeral.size': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>], 'network.incoming.packets': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
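The register_pollster_execution lines carry three per-cycle structures, all printed inside the brackets above: a shared inspector cache (inspect_vnics), a pollster history mapping each meter to the resources already polled this cycle, and a discovery cache keyed by discovery method. As plain dicts (a hedged reconstruction of what is logged, not ceilometer's actual classes):

    cache = {"inspect_vnics": {}}  # shared inspector results, filled lazily
    pollster_history = {           # meter name -> resources already polled
        "disk.ephemeral.size": ["<NovaLikeServer: te-7486994-...>"],
        "network.incoming.packets": ["<NovaLikeServer: te-7486994-...>"],
    }
    discovery_cache = {            # discovery method -> discovered resources
        "local_instances": ["<NovaLikeServer: te-7486994-...>"],
    }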
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.867 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.867 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.868 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.868 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.868 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.868 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.868 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.868 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.868 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.869 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.869 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.869 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T23:46:36.868334) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.869 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.869 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.869 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.870 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.870 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.870 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T23:46:36.869404) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.870 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.870 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.870 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.870 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.871 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T23:46:36.870901) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.867 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce501b5f0>] with cache [{'inspect_vnics': {'0449208f-d12b-40cb-aa71-6f67f687cb6f': (5479.579749236, [InterfaceStats(name='tapa6675240-60', mac='fa:16:3e:d6:2e:64', fref=None, parameters={'interfaceid': None, 'bridge': None}, rx_bytes=1352, tx_bytes=1620, rx_packets=9, tx_packets=16, rx_drop=0, tx_drop=0, rx_errors=0, tx_errors=0, rx_bytes_delta=1262, tx_bytes_delta=1620)])}, 'inspect_instance': {}}], pollster history [{'disk.ephemeral.size': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>], 'network.incoming.packets': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>], 'disk.root.size': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>], 'network.incoming.packets.drop': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>], 'cpu': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.872 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce501b5f0>] with cache [{'inspect_vnics': {'0449208f-d12b-40cb-aa71-6f67f687cb6f': (5479.579749236, [InterfaceStats(name='tapa6675240-60', mac='fa:16:3e:d6:2e:64', fref=None, parameters={'interfaceid': None, 'bridge': None}, rx_bytes=1352, tx_bytes=1620, rx_packets=9, tx_packets=16, rx_drop=0, tx_drop=0, rx_errors=0, tx_errors=0, rx_bytes_delta=1262, tx_bytes_delta=1620)])}, 'inspect_instance': {}}], pollster history [{'disk.ephemeral.size': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>], 'network.incoming.packets': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>], 'disk.root.size': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>], 'network.incoming.packets.drop': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>], 'cpu': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.872 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce501b5f0>] with cache [{'inspect_vnics': {'0449208f-d12b-40cb-aa71-6f67f687cb6f': (5479.579749236, [InterfaceStats(name='tapa6675240-60', mac='fa:16:3e:d6:2e:64', fref=None, parameters={'interfaceid': None, 'bridge': None}, rx_bytes=1352, tx_bytes=1620, rx_packets=9, tx_packets=16, rx_drop=0, tx_drop=0, rx_errors=0, tx_errors=0, rx_bytes_delta=1262, tx_bytes_delta=1620)])}, 'inspect_instance': {}}], pollster history [{'disk.ephemeral.size': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>], 'network.incoming.packets': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>], 'disk.root.size': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>], 'network.incoming.packets.drop': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>], 'cpu': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.873 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce501b5f0>] with cache [{'inspect_vnics': {'0449208f-d12b-40cb-aa71-6f67f687cb6f': (5479.579749236, [InterfaceStats(name='tapa6675240-60', mac='fa:16:3e:d6:2e:64', fref=None, parameters={'interfaceid': None, 'bridge': None}, rx_bytes=1352, tx_bytes=1620, rx_packets=9, tx_packets=16, rx_drop=0, tx_drop=0, rx_errors=0, tx_errors=0, rx_bytes_delta=1262, tx_bytes_delta=1620)])}, 'inspect_instance': {}}], pollster history [{'disk.ephemeral.size': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>], 'network.incoming.packets': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>], 'disk.root.size': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>], 'network.incoming.packets.drop': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>], 'cpu': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.874 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce501b5f0>] with cache [{'inspect_vnics': {'0449208f-d12b-40cb-aa71-6f67f687cb6f': (5479.579749236, [InterfaceStats(name='tapa6675240-60', mac='fa:16:3e:d6:2e:64', fref=None, parameters={'interfaceid': None, 'bridge': None}, rx_bytes=1352, tx_bytes=1620, rx_packets=9, tx_packets=16, rx_drop=0, tx_drop=0, rx_errors=0, tx_errors=0, rx_bytes_delta=1262, tx_bytes_delta=1620)])}, 'inspect_instance': {}}], pollster history [{'disk.ephemeral.size': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>], 'network.incoming.packets': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>], 'disk.root.size': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>], 'network.incoming.packets.drop': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>], 'cpu': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.874 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce501b5f0>] with cache [{'inspect_vnics': {'0449208f-d12b-40cb-aa71-6f67f687cb6f': (5479.579749236, [InterfaceStats(name='tapa6675240-60', mac='fa:16:3e:d6:2e:64', fref=None, parameters={'interfaceid': None, 'bridge': None}, rx_bytes=1352, tx_bytes=1620, rx_packets=9, tx_packets=16, rx_drop=0, tx_drop=0, rx_errors=0, tx_errors=0, rx_bytes_delta=1262, tx_bytes_delta=1620)])}, 'inspect_instance': {}}], pollster history [{'disk.ephemeral.size': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>], 'network.incoming.packets': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>], 'disk.root.size': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>], 'network.incoming.packets.drop': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>], 'cpu': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>]}], and discovery cache [{'local_instances': [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di>]}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
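By this point the inspect_vnics cache is populated and the log prints the cached entry in full. A hedged reconstruction of that entry (the real InterfaceStats lives in ceilometer's libvirt inspector; this namedtuple only mirrors the fields shown above):

    from collections import namedtuple

    InterfaceStats = namedtuple("InterfaceStats", [
        "name", "mac", "fref", "parameters",
        "rx_bytes", "tx_bytes", "rx_packets", "tx_packets",
        "rx_drop", "tx_drop", "rx_errors", "tx_errors",
        "rx_bytes_delta", "tx_bytes_delta",
    ])

    vnic_cache = {
        "0449208f-d12b-40cb-aa71-6f67f687cb6f": (
            5479.579749236,  # monotonic read time, used for deltas and rates
            [InterfaceStats(
                name="tapa6675240-60", mac="fa:16:3e:d6:2e:64", fref=None,
                parameters={"interfaceid": None, "bridge": None},
                rx_bytes=1352, tx_bytes=1620, rx_packets=9, tx_packets=16,
                rx_drop=0, tx_drop=0, rx_errors=0, tx_errors=0,
                rx_bytes_delta=1262, tx_bytes_delta=1620,
            )],
        ),
    }
    # rx_packets=9 is exactly the network.incoming.packets volume sampled earlier.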
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.893 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/cpu volume: 124650000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.893 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
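The cpu meter is cumulative guest CPU time in nanoseconds, so the volume above converts directly (a one-line check, assuming the meter's standard ns unit):

    cpu_ns = 124_650_000_000
    print(cpu_ns / 1e9)  # 124.65 s of CPU time consumed by this 1-vCPU instance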
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.894 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.894 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.894 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.894 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.894 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.894 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.894 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.895 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.895 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.895 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.895 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.895 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.895 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T23:46:36.894512) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.896 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T23:46:36.895661) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.908 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.909 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.909 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
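The two disk.device.capacity samples correspond to the instance's two block devices. The first matches the flavor exactly; the second, 509952-byte device is plausibly a config drive (an assumption, since the log does not name the devices):

    assert 1073741824 == 1 * 1024**3  # 1 GiB root disk == the flavor's disk=1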
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.910 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.910 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.910 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.910 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.910 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.910 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.910 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T23:46:36.910469) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.910 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.911 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.911 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.911 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.911 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.911 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.911 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.bytes.delta volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.911 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.911 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.912 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.912 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.912 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.912 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T23:46:36.911552) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.912 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.912 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/memory.usage volume: 43.72265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.912 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
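memory.usage is reported in MB, so the sample above can be read directly against the m1.nano flavor's 128 MB of RAM:

    used_mb, ram_mb = 43.72265625, 128
    print(f"{used_mb / ram_mb:.1%}")  # 34.2% of the flavor's RAM in use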
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.912 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.913 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.913 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.913 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.913 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T23:46:36.912430) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.913 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.913 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.913 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T23:46:36.913405) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.913 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.913 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.914 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
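This is the only pollster skipped this cycle. One plausible reading (hedged; the exact bookkeeping lives in _internal_pollster_run and is not reproduced here) is that the manager diffs each pollster's discovered resources against what has already been handled for it this cycle and skips when nothing new remains:

    def should_skip(discovered, already_polled):
        # Skip when discovery yields no resource that has not been polled yet.
        return not [r for r in discovered if r not in already_polled]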
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.914 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.914 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.914 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.914 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.914 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.914 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.914 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.914 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.914 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.914 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.915 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.915 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.915 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T23:46:36.914359) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.915 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T23:46:36.915129) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.952 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.bytes volume: 29572096 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.952 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.953 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.953 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.953 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.953 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.953 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.953 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.953 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.953 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.953 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.954 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.954 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.954 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.954 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.954 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.954 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.954 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.954 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.954 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.955 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.955 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.955 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T23:46:36.953504) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.955 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.latency volume: 931217066 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.955 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T23:46:36.954312) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.955 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.latency volume: 58221202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.955 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
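disk.device.read.latency is cumulative time in nanoseconds, so the two device samples above read as roughly 0.93 s and 0.06 s of accumulated read wait (assuming the meter's standard ns unit):

    for ns in (931_217_066, 58_221_202):
        print(f"{ns / 1e9:.2f} s")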
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.955 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.955 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.955 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.956 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T23:46:36.955189) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.956 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.956 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.956 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.956 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.956 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.956 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.956 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.957 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.957 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.957 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.957 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T23:46:36.956235) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.957 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.957 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.957 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.957 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.958 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.958 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.958 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.958 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.958 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.requests volume: 1062 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.958 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.958 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.959 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.959 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.959 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.959 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T23:46:36.957359) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.959 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T23:46:36.958321) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.959 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.959 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.959 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.959 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.960 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.960 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.960 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.960 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T23:46:36.959377) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.960 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.960 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.960 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.960 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.961 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.961 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T23:46:36.960467) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.961 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.961 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.961 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.latency volume: 3896122278 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.961 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.961 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T23:46:36.961339) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.961 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.962 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.962 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.962 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.962 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.962 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.bytes.delta volume: 1262 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.962 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.962 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.962 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.963 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T23:46:36.962354) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.963 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.963 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.963 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.requests volume: 309 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.963 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.963 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T23:46:36.963253) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.963 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.964 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
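[annotation] network.incoming.bytes.rate is skipped because discovery produced no resources that were not already covered in this cycle. A sketch of that guard, with an assumed per-cycle cache (the names are not ceilometer's):

    def maybe_poll(name, discovered, seen_this_cycle):
        new = [r for r in discovered if r not in seen_this_cycle]
        if not new:
            print(f"Skip pollster {name}, no new resources found this cycle")
            return []
        seen_this_cycle.update(new)   # remember what was polled this cycle
        return new
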
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.966 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.967 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.967 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.967 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:46:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:46:36.967 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:46:39 compute-0 nova_compute[189387]: 2025-11-26 23:46:39.882 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:46:40 compute-0 nova_compute[189387]: 2025-11-26 23:46:40.782 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:46:40 compute-0 podman[254078]: 2025-11-26 23:46:40.815863638 +0000 UTC m=+0.110391482 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 26 23:46:44 compute-0 nova_compute[189387]: 2025-11-26 23:46:44.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:46:44 compute-0 nova_compute[189387]: 2025-11-26 23:46:44.124 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:46:44 compute-0 nova_compute[189387]: 2025-11-26 23:46:44.124 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 23:46:44 compute-0 nova_compute[189387]: 2025-11-26 23:46:44.362 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-0449208f-d12b-40cb-aa71-6f67f687cb6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:46:44 compute-0 nova_compute[189387]: 2025-11-26 23:46:44.363 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-0449208f-d12b-40cb-aa71-6f67f687cb6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:46:44 compute-0 nova_compute[189387]: 2025-11-26 23:46:44.364 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 23:46:44 compute-0 nova_compute[189387]: 2025-11-26 23:46:44.365 189391 DEBUG nova.objects.instance [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0449208f-d12b-40cb-aa71-6f67f687cb6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 23:46:44 compute-0 nova_compute[189387]: 2025-11-26 23:46:44.885 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:46:45 compute-0 nova_compute[189387]: 2025-11-26 23:46:45.625 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Updating instance_info_cache with network_info: [{"id": "a6675240-60ea-47db-9ef6-66080adb5743", "address": "fa:16:3e:d6:2e:64", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6675240-60", "ovs_interfaceid": "a6675240-60ea-47db-9ef6-66080adb5743", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 23:46:45 compute-0 nova_compute[189387]: 2025-11-26 23:46:45.650 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-0449208f-d12b-40cb-aa71-6f67f687cb6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 23:46:45 compute-0 nova_compute[189387]: 2025-11-26 23:46:45.651 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
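[annotation] The heal cycle above is a lock-guarded cache refresh: nova takes a per-instance "refresh_cache-<uuid>" lock, forces a fresh network_info query against neutron, writes the result back into the instance's info_cache, and releases the lock. A sketch of the same pattern using the real oslo.concurrency lock API; the instance and client objects are stand-ins:

    from oslo_concurrency import lockutils

    def heal_info_cache(instance, neutron):
        # Same lock name scheme as the log: one lock per instance UUID.
        with lockutils.lock('refresh_cache-%s' % instance.uuid):
            nw_info = neutron.get_instance_nw_info(instance)  # force refresh
            instance.info_cache.network_info = nw_info
            instance.info_cache.save()
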
Nov 26 23:46:45 compute-0 nova_compute[189387]: 2025-11-26 23:46:45.782 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:46:47 compute-0 nova_compute[189387]: 2025-11-26 23:46:47.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:46:47 compute-0 nova_compute[189387]: 2025-11-26 23:46:47.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
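[annotation] _reclaim_queued_deletes is a no-op here because reclaim_instance_interval is not set to a positive value, so soft-deleted instances are never reclaimed by this periodic task. The guard amounts to the following sketch (CONF access and the instance objects are illustrative):

    def _reclaim_queued_deletes(conf, soft_deleted_instances):
        if conf.reclaim_instance_interval <= 0:
            return  # "CONF.reclaim_instance_interval <= 0, skipping..."
        for inst in soft_deleted_instances:
            inst.destroy()  # reclaim instances queued for deletion
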
Nov 26 23:46:48 compute-0 nova_compute[189387]: 2025-11-26 23:46:48.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:46:48 compute-0 nova_compute[189387]: 2025-11-26 23:46:48.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:46:48 compute-0 nova_compute[189387]: 2025-11-26 23:46:48.164 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:46:48 compute-0 nova_compute[189387]: 2025-11-26 23:46:48.165 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:46:48 compute-0 nova_compute[189387]: 2025-11-26 23:46:48.166 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:46:48 compute-0 nova_compute[189387]: 2025-11-26 23:46:48.167 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:46:48 compute-0 nova_compute[189387]: 2025-11-26 23:46:48.248 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:46:48 compute-0 nova_compute[189387]: 2025-11-26 23:46:48.344 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:46:48 compute-0 nova_compute[189387]: 2025-11-26 23:46:48.345 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:46:48 compute-0 nova_compute[189387]: 2025-11-26 23:46:48.408 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
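[annotation] The resource audit probes the instance disk with qemu-img under a prlimit wrapper (1 GiB address space, 30 s CPU time), exactly as the two commands above show. The equivalent call through oslo.concurrency's public API, with the disk path taken from the log:

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1024 ** 3,  # --as=1073741824
                                           cpu_time=30))            # --cpu=30
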
Nov 26 23:46:48 compute-0 podman[254109]: 2025-11-26 23:46:48.822699749 +0000 UTC m=+0.097919392 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 26 23:46:48 compute-0 podman[254114]: 2025-11-26 23:46:48.830503173 +0000 UTC m=+0.103636709 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 26 23:46:48 compute-0 podman[254102]: 2025-11-26 23:46:48.830724109 +0000 UTC m=+0.126398743 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vendor=Red Hat, Inc., config_id=edpm, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, managed_by=edpm_ansible, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1214.1726694543, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 26 23:46:48 compute-0 nova_compute[189387]: 2025-11-26 23:46:48.838 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:46:48 compute-0 nova_compute[189387]: 2025-11-26 23:46:48.839 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5124MB free_disk=72.2780876159668GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 23:46:48 compute-0 nova_compute[189387]: 2025-11-26 23:46:48.839 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:46:48 compute-0 nova_compute[189387]: 2025-11-26 23:46:48.839 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:46:48 compute-0 podman[254104]: 2025-11-26 23:46:48.847269801 +0000 UTC m=+0.131247874 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 23:46:48 compute-0 podman[254122]: 2025-11-26 23:46:48.851855007 +0000 UTC m=+0.116582143 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=openstack_network_exporter, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., release=1755695350, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9)
Nov 26 23:46:48 compute-0 podman[254103]: 2025-11-26 23:46:48.876593594 +0000 UTC m=+0.162466840 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 26 23:46:48 compute-0 nova_compute[189387]: 2025-11-26 23:46:48.946 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 0449208f-d12b-40cb-aa71-6f67f687cb6f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:46:48 compute-0 nova_compute[189387]: 2025-11-26 23:46:48.947 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:46:48 compute-0 nova_compute[189387]: 2025-11-26 23:46:48.947 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 23:46:49 compute-0 nova_compute[189387]: 2025-11-26 23:46:49.002 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:46:49 compute-0 nova_compute[189387]: 2025-11-26 23:46:49.020 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 23:46:49 compute-0 nova_compute[189387]: 2025-11-26 23:46:49.022 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:46:49 compute-0 nova_compute[189387]: 2025-11-26 23:46:49.022 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.183s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
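[annotation] update_available_resource ends with the inventory comparison against placement; since nothing changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78, no update is sent. The reported inventory, restated as the dict shape visible in the log above:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'min_unit': 1,
                      'max_unit': 8,    'step_size': 1, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1,
                      'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'min_unit': 1,
                      'max_unit': 79,   'step_size': 1, 'allocation_ratio': 0.9},
    }
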
Nov 26 23:46:49 compute-0 nova_compute[189387]: 2025-11-26 23:46:49.888 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:46:50 compute-0 nova_compute[189387]: 2025-11-26 23:46:50.785 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:46:52 compute-0 nova_compute[189387]: 2025-11-26 23:46:52.023 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:46:52 compute-0 nova_compute[189387]: 2025-11-26 23:46:52.119 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:46:53 compute-0 nova_compute[189387]: 2025-11-26 23:46:53.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:46:54 compute-0 nova_compute[189387]: 2025-11-26 23:46:54.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:46:54 compute-0 nova_compute[189387]: 2025-11-26 23:46:54.126 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:46:54 compute-0 nova_compute[189387]: 2025-11-26 23:46:54.891 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:46:55 compute-0 nova_compute[189387]: 2025-11-26 23:46:55.788 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:46:59 compute-0 podman[203621]: time="2025-11-26T23:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:46:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:46:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4809 "" "Go-http-client/1.1"
Nov 26 23:46:59 compute-0 nova_compute[189387]: 2025-11-26 23:46:59.894 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:47:00 compute-0 nova_compute[189387]: 2025-11-26 23:47:00.791 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
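The recurring "[POLLIN] on fd 26" lines are the OVS IDL's poll loop reporting that the OVSDB connection became readable. A minimal stdlib illustration of the same readiness notification, assuming nothing beyond a pipe:

```python
import os
import select

# Register an fd with poll() and wake up when it becomes readable,
# which is exactly the POLLIN event the ovsdbapp vlog lines report.
r, w = os.pipe()
poller = select.poll()
poller.register(r, select.POLLIN)

os.write(w, b"ping")                 # make the read end readable
for fd, event in poller.poll(1000):  # returns [(fd, POLLIN)]
    if event & select.POLLIN:
        print(f"[POLLIN] on fd {fd}: {os.read(fd, 16)!r}")
```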
Nov 26 23:47:00 compute-0 podman[254218]: 2025-11-26 23:47:00.847028806 +0000 UTC m=+0.140085496 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Nov 26 23:47:01 compute-0 openstack_network_exporter[205787]: ERROR   23:47:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:47:01 compute-0 openstack_network_exporter[205787]: ERROR   23:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:47:01 compute-0 openstack_network_exporter[205787]: ERROR   23:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:47:01 compute-0 openstack_network_exporter[205787]: ERROR   23:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:47:01 compute-0 openstack_network_exporter[205787]: ERROR   23:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
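The appctl errors above are expected on a compute-only node: the exporter looks for ovs-appctl control sockets for ovsdb-server and ovn-northd, and ovn-northd simply does not run here. A hedged pre-flight sketch of that check; the rundir and the <daemon>.<pid>.ctl pattern reflect the usual OVS socket layout, not this exporter's actual configuration:

```python
import glob

# Hypothetical pre-flight check mirroring what the exporter complains about:
# appctl needs a <daemon>.<pid>.ctl control socket before it can issue
# commands such as dpif-netdev/pmd-perf-show.
def find_ctl_sockets(daemon: str, rundir: str = "/var/run/openvswitch"):
    return glob.glob(f"{rundir}/{daemon}.*.ctl")

for daemon in ("ovsdb-server", "ovs-vswitchd", "ovn-northd"):
    socks = find_ctl_sockets(daemon)
    print(f"{daemon}: {socks if socks else 'no control socket files found'}")
```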
Nov 26 23:47:04 compute-0 podman[254238]: 2025-11-26 23:47:04.818477531 +0000 UTC m=+0.114029962 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 23:47:04 compute-0 nova_compute[189387]: 2025-11-26 23:47:04.896 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:47:05 compute-0 nova_compute[189387]: 2025-11-26 23:47:05.792 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:47:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:47:09.655 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:47:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:47:09.656 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:47:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:47:09.657 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:47:09 compute-0 nova_compute[189387]: 2025-11-26 23:47:09.899 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:47:10 compute-0 nova_compute[189387]: 2025-11-26 23:47:10.794 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:47:11 compute-0 podman[254262]: 2025-11-26 23:47:11.844687555 +0000 UTC m=+0.139855050 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Nov 26 23:47:14 compute-0 nova_compute[189387]: 2025-11-26 23:47:14.901 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:47:15 compute-0 nova_compute[189387]: 2025-11-26 23:47:15.797 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:47:19 compute-0 podman[254282]: 2025-11-26 23:47:19.816036125 +0000 UTC m=+0.107222037 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, distribution-scope=public, io.openshift.expose-services=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, managed_by=edpm_ansible, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., config_id=edpm, release=1214.1726694543, version=9.4, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, architecture=x86_64)
Nov 26 23:47:19 compute-0 podman[254289]: 2025-11-26 23:47:19.839852037 +0000 UTC m=+0.114023294 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 23:47:19 compute-0 podman[254303]: 2025-11-26 23:47:19.842433068 +0000 UTC m=+0.093473181 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, io.openshift.expose-services=, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal)
Nov 26 23:47:19 compute-0 podman[254290]: 2025-11-26 23:47:19.858301052 +0000 UTC m=+0.126898106 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 26 23:47:19 compute-0 podman[254291]: 2025-11-26 23:47:19.879702567 +0000 UTC m=+0.133444664 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 26 23:47:19 compute-0 podman[254283]: 2025-11-26 23:47:19.90022874 +0000 UTC m=+0.159577300 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 26 23:47:19 compute-0 nova_compute[189387]: 2025-11-26 23:47:19.903 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:47:20 compute-0 nova_compute[189387]: 2025-11-26 23:47:20.801 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:47:24 compute-0 nova_compute[189387]: 2025-11-26 23:47:24.905 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:47:25 compute-0 nova_compute[189387]: 2025-11-26 23:47:25.801 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:47:29 compute-0 podman[203621]: time="2025-11-26T23:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:47:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:47:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4816 "" "Go-http-client/1.1"
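The podman[203621] lines are the libpod REST service answering a Go client over its unix socket. The same containers/json query can be reproduced by hand; a sketch assuming the default root socket path /run/podman/podman.sock (the same path the podman_exporter config above mounts):

```python
import http.client
import socket

# Minimal HTTP-over-unix-socket client for the libpod REST API, mirroring
# the "GET /v4.9.3/libpod/containers/json?all=true" request logged above.
class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, path: str):
        super().__init__("localhost")
        self.unix_path = path

    def connect(self) -> None:
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.unix_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
print(resp.status, len(resp.read()), "bytes")
```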
Nov 26 23:47:29 compute-0 nova_compute[189387]: 2025-11-26 23:47:29.907 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:47:30 compute-0 nova_compute[189387]: 2025-11-26 23:47:30.804 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:47:31 compute-0 openstack_network_exporter[205787]: ERROR   23:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:47:31 compute-0 openstack_network_exporter[205787]: ERROR   23:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:47:31 compute-0 openstack_network_exporter[205787]: ERROR   23:47:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:47:31 compute-0 openstack_network_exporter[205787]: ERROR   23:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:47:31 compute-0 openstack_network_exporter[205787]: ERROR   23:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:47:31 compute-0 podman[254398]: 2025-11-26 23:47:31.82830191 +0000 UTC m=+0.109026867 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:47:34 compute-0 nova_compute[189387]: 2025-11-26 23:47:34.909 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:47:35 compute-0 podman[254418]: 2025-11-26 23:47:35.768719485 +0000 UTC m=+0.068702482 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:47:35 compute-0 nova_compute[189387]: 2025-11-26 23:47:35.808 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:47:36 compute-0 nova_compute[189387]: 2025-11-26 23:47:36.347 189391 DEBUG oslo_concurrency.lockutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Acquiring lock "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:47:36 compute-0 nova_compute[189387]: 2025-11-26 23:47:36.349 189391 DEBUG oslo_concurrency.lockutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:47:36 compute-0 nova_compute[189387]: 2025-11-26 23:47:36.369 189391 DEBUG nova.compute.manager [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 26 23:47:36 compute-0 nova_compute[189387]: 2025-11-26 23:47:36.443 189391 DEBUG oslo_concurrency.lockutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:47:36 compute-0 nova_compute[189387]: 2025-11-26 23:47:36.444 189391 DEBUG oslo_concurrency.lockutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:47:36 compute-0 nova_compute[189387]: 2025-11-26 23:47:36.457 189391 DEBUG nova.virt.hardware [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 26 23:47:36 compute-0 nova_compute[189387]: 2025-11-26 23:47:36.458 189391 INFO nova.compute.claims [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Claim successful on node compute-0.ctlplane.example.com
Nov 26 23:47:36 compute-0 nova_compute[189387]: 2025-11-26 23:47:36.604 189391 DEBUG nova.compute.provider_tree [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:47:36 compute-0 nova_compute[189387]: 2025-11-26 23:47:36.621 189391 DEBUG nova.scheduler.client.report [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 23:47:36 compute-0 nova_compute[189387]: 2025-11-26 23:47:36.647 189391 DEBUG oslo_concurrency.lockutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:47:36 compute-0 nova_compute[189387]: 2025-11-26 23:47:36.648 189391 DEBUG nova.compute.manager [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 26 23:47:36 compute-0 nova_compute[189387]: 2025-11-26 23:47:36.701 189391 DEBUG nova.compute.manager [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 26 23:47:36 compute-0 nova_compute[189387]: 2025-11-26 23:47:36.702 189391 DEBUG nova.network.neutron [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 26 23:47:36 compute-0 nova_compute[189387]: 2025-11-26 23:47:36.721 189391 INFO nova.virt.libvirt.driver [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 26 23:47:36 compute-0 nova_compute[189387]: 2025-11-26 23:47:36.739 189391 DEBUG nova.compute.manager [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 26 23:47:36 compute-0 nova_compute[189387]: 2025-11-26 23:47:36.857 189391 DEBUG nova.compute.manager [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 26 23:47:36 compute-0 nova_compute[189387]: 2025-11-26 23:47:36.859 189391 DEBUG nova.virt.libvirt.driver [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 26 23:47:36 compute-0 nova_compute[189387]: 2025-11-26 23:47:36.860 189391 INFO nova.virt.libvirt.driver [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Creating image(s)
Nov 26 23:47:36 compute-0 nova_compute[189387]: 2025-11-26 23:47:36.861 189391 DEBUG oslo_concurrency.lockutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Acquiring lock "/var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:47:36 compute-0 nova_compute[189387]: 2025-11-26 23:47:36.861 189391 DEBUG oslo_concurrency.lockutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "/var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:47:36 compute-0 nova_compute[189387]: 2025-11-26 23:47:36.862 189391 DEBUG oslo_concurrency.lockutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "/var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:47:36 compute-0 nova_compute[189387]: 2025-11-26 23:47:36.881 189391 DEBUG oslo_concurrency.processutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6646de0a938e108bf82b01ae34ceaf07f09b8ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:47:36 compute-0 nova_compute[189387]: 2025-11-26 23:47:36.978 189391 DEBUG oslo_concurrency.processutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6646de0a938e108bf82b01ae34ceaf07f09b8ad --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:47:36 compute-0 nova_compute[189387]: 2025-11-26 23:47:36.979 189391 DEBUG oslo_concurrency.lockutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Acquiring lock "b6646de0a938e108bf82b01ae34ceaf07f09b8ad" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:47:36 compute-0 nova_compute[189387]: 2025-11-26 23:47:36.980 189391 DEBUG oslo_concurrency.lockutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "b6646de0a938e108bf82b01ae34ceaf07f09b8ad" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:47:36 compute-0 nova_compute[189387]: 2025-11-26 23:47:36.996 189391 DEBUG oslo_concurrency.processutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6646de0a938e108bf82b01ae34ceaf07f09b8ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:47:37 compute-0 nova_compute[189387]: 2025-11-26 23:47:37.092 189391 DEBUG oslo_concurrency.processutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6646de0a938e108bf82b01ae34ceaf07f09b8ad --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:47:37 compute-0 nova_compute[189387]: 2025-11-26 23:47:37.094 189391 DEBUG oslo_concurrency.processutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b6646de0a938e108bf82b01ae34ceaf07f09b8ad,backing_fmt=raw /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:47:37 compute-0 nova_compute[189387]: 2025-11-26 23:47:37.147 189391 DEBUG oslo_concurrency.processutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/b6646de0a938e108bf82b01ae34ceaf07f09b8ad,backing_fmt=raw /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk 1073741824" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:47:37 compute-0 nova_compute[189387]: 2025-11-26 23:47:37.150 189391 DEBUG oslo_concurrency.lockutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "b6646de0a938e108bf82b01ae34ceaf07f09b8ad" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:47:37 compute-0 nova_compute[189387]: 2025-11-26 23:47:37.151 189391 DEBUG oslo_concurrency.processutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6646de0a938e108bf82b01ae34ceaf07f09b8ad --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:47:37 compute-0 nova_compute[189387]: 2025-11-26 23:47:37.240 189391 DEBUG oslo_concurrency.processutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/b6646de0a938e108bf82b01ae34ceaf07f09b8ad --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:47:37 compute-0 nova_compute[189387]: 2025-11-26 23:47:37.243 189391 DEBUG nova.virt.disk.api [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Checking if we can resize image /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 26 23:47:37 compute-0 nova_compute[189387]: 2025-11-26 23:47:37.244 189391 DEBUG oslo_concurrency.processutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:47:37 compute-0 nova_compute[189387]: 2025-11-26 23:47:37.305 189391 DEBUG oslo_concurrency.processutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:47:37 compute-0 nova_compute[189387]: 2025-11-26 23:47:37.307 189391 DEBUG nova.virt.disk.api [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Cannot resize image /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
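The spawn path above creates a qcow2 overlay on the cached base image at the flavor's root-disk size (1073741824 bytes, i.e. 1 GiB), then re-checks the virtual size and skips the resize, since images are only ever grown, never shrunk. A standalone sketch of the same sequence; all paths are placeholders, only the qemu-img invocations mirror the logged commands:

```python
import json
import subprocess

BASE = "/tmp/base.img"          # placeholder for the cached base image
OVERLAY = "/tmp/overlay.qcow2"  # placeholder for the instance disk
TARGET_BYTES = 1073741824       # 1 GiB, as in the logged qemu-img create call

# Stand-in base image, then a qcow2 overlay backed by it, as in the log.
subprocess.run(["qemu-img", "create", "-f", "raw", BASE, "64M"], check=True)
subprocess.run(
    ["qemu-img", "create", "-f", "qcow2",
     "-o", f"backing_file={BASE},backing_fmt=raw", OVERLAY, str(TARGET_BYTES)],
    check=True,
)

# Inspect the result the same way the log does, then apply the resize guard.
info = json.loads(subprocess.run(
    ["qemu-img", "info", OVERLAY, "--force-share", "--output=json"],
    check=True, capture_output=True,
).stdout)
if TARGET_BYTES > info["virtual-size"]:
    print("would resize up to", TARGET_BYTES, "bytes")
else:
    print("cannot resize image to a smaller size; skipping")
```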
Nov 26 23:47:37 compute-0 nova_compute[189387]: 2025-11-26 23:47:37.308 189391 DEBUG nova.objects.instance [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lazy-loading 'migration_context' on Instance uuid b7d5e999-38ca-46e8-b572-cc9fad0fc2cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 23:47:37 compute-0 nova_compute[189387]: 2025-11-26 23:47:37.329 189391 DEBUG nova.virt.libvirt.driver [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 26 23:47:37 compute-0 nova_compute[189387]: 2025-11-26 23:47:37.330 189391 DEBUG nova.virt.libvirt.driver [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Ensure instance console log exists: /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 26 23:47:37 compute-0 nova_compute[189387]: 2025-11-26 23:47:37.330 189391 DEBUG oslo_concurrency.lockutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:47:37 compute-0 nova_compute[189387]: 2025-11-26 23:47:37.331 189391 DEBUG oslo_concurrency.lockutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:47:37 compute-0 nova_compute[189387]: 2025-11-26 23:47:37.331 189391 DEBUG oslo_concurrency.lockutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:47:37 compute-0 nova_compute[189387]: 2025-11-26 23:47:37.335 189391 DEBUG nova.policy [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5715267a6ec9422aa9b3ef4a2956aa77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '717a3950b66241768222cb5d4ba3291e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 26 23:47:38 compute-0 nova_compute[189387]: 2025-11-26 23:47:38.772 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:47:38 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:47:38.777 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:74:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:17:d1:48:8c:c3'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 26 23:47:38 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:47:38.781 106595 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 26 23:47:38 compute-0 nova_compute[189387]: 2025-11-26 23:47:38.886 189391 DEBUG nova.network.neutron [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Successfully created port: 538c994f-bee1-4965-9065-a8ef17e40bea _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 26 23:47:39 compute-0 nova_compute[189387]: 2025-11-26 23:47:39.912 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:47:40 compute-0 nova_compute[189387]: 2025-11-26 23:47:40.315 189391 DEBUG nova.network.neutron [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Successfully updated port: 538c994f-bee1-4965-9065-a8ef17e40bea _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 26 23:47:40 compute-0 nova_compute[189387]: 2025-11-26 23:47:40.348 189391 DEBUG oslo_concurrency.lockutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Acquiring lock "refresh_cache-b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:47:40 compute-0 nova_compute[189387]: 2025-11-26 23:47:40.349 189391 DEBUG oslo_concurrency.lockutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Acquired lock "refresh_cache-b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:47:40 compute-0 nova_compute[189387]: 2025-11-26 23:47:40.349 189391 DEBUG nova.network.neutron [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 26 23:47:40 compute-0 nova_compute[189387]: 2025-11-26 23:47:40.565 189391 DEBUG nova.compute.manager [req-d9399252-9b40-464e-8ddd-d8cc76cff3cf req-8ee08396-177a-4386-82f5-523d6d1213d5 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Received event network-changed-538c994f-bee1-4965-9065-a8ef17e40bea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 26 23:47:40 compute-0 nova_compute[189387]: 2025-11-26 23:47:40.566 189391 DEBUG nova.compute.manager [req-d9399252-9b40-464e-8ddd-d8cc76cff3cf req-8ee08396-177a-4386-82f5-523d6d1213d5 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Refreshing instance network info cache due to event network-changed-538c994f-bee1-4965-9065-a8ef17e40bea. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 26 23:47:40 compute-0 nova_compute[189387]: 2025-11-26 23:47:40.567 189391 DEBUG oslo_concurrency.lockutils [req-d9399252-9b40-464e-8ddd-d8cc76cff3cf req-8ee08396-177a-4386-82f5-523d6d1213d5 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "refresh_cache-b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:47:40 compute-0 nova_compute[189387]: 2025-11-26 23:47:40.809 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:47:40 compute-0 nova_compute[189387]: 2025-11-26 23:47:40.829 189391 DEBUG nova.network.neutron [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.541 189391 DEBUG nova.network.neutron [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Updating instance_info_cache with network_info: [{"id": "538c994f-bee1-4965-9065-a8ef17e40bea", "address": "fa:16:3e:47:75:6d", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap538c994f-be", "ovs_interfaceid": "538c994f-bee1-4965-9065-a8ef17e40bea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.584 189391 DEBUG oslo_concurrency.lockutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Releasing lock "refresh_cache-b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.585 189391 DEBUG nova.compute.manager [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Instance network_info: |[{"id": "538c994f-bee1-4965-9065-a8ef17e40bea", "address": "fa:16:3e:47:75:6d", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap538c994f-be", "ovs_interfaceid": "538c994f-bee1-4965-9065-a8ef17e40bea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.586 189391 DEBUG oslo_concurrency.lockutils [req-d9399252-9b40-464e-8ddd-d8cc76cff3cf req-8ee08396-177a-4386-82f5-523d6d1213d5 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquired lock "refresh_cache-b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.587 189391 DEBUG nova.network.neutron [req-d9399252-9b40-464e-8ddd-d8cc76cff3cf req-8ee08396-177a-4386-82f5-523d6d1213d5 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Refreshing network info cache for port 538c994f-bee1-4965-9065-a8ef17e40bea _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.593 189391 DEBUG nova.virt.libvirt.driver [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Start _get_guest_xml network_info=[{"id": "538c994f-bee1-4965-9065-a8ef17e40bea", "address": "fa:16:3e:47:75:6d", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap538c994f-be", "ovs_interfaceid": "538c994f-bee1-4965-9065-a8ef17e40bea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T23:44:08Z,direct_url=<?>,disk_format='qcow2',id=aa1a3d84-3b07-42eb-bb8c-755851616ed6,min_disk=0,min_ram=0,name='tempest-scenario-img--1845119861',owner='717a3950b66241768222cb5d4ba3291e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T23:44:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'boot_index': 0, 'device_type': 'disk', 'guest_format': None, 'encryption_format': None, 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_secret_uuid': None, 'encrypted': False, 'device_name': '/dev/vda', 'image_id': 'aa1a3d84-3b07-42eb-bb8c-755851616ed6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.607 189391 WARNING nova.virt.libvirt.driver [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.617 189391 DEBUG nova.virt.libvirt.host [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.618 189391 DEBUG nova.virt.libvirt.host [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.632 189391 DEBUG nova.virt.libvirt.host [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.634 189391 DEBUG nova.virt.libvirt.host [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.635 189391 DEBUG nova.virt.libvirt.driver [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.636 189391 DEBUG nova.virt.hardware [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-26T23:40:03Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='a4234b2d-ed51-4e17-ad57-a8fb6154451b',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-26T23:44:08Z,direct_url=<?>,disk_format='qcow2',id=aa1a3d84-3b07-42eb-bb8c-755851616ed6,min_disk=0,min_ram=0,name='tempest-scenario-img--1845119861',owner='717a3950b66241768222cb5d4ba3291e',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-26T23:44:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.637 189391 DEBUG nova.virt.hardware [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.638 189391 DEBUG nova.virt.hardware [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.639 189391 DEBUG nova.virt.hardware [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.640 189391 DEBUG nova.virt.hardware [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.640 189391 DEBUG nova.virt.hardware [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.641 189391 DEBUG nova.virt.hardware [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.642 189391 DEBUG nova.virt.hardware [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.643 189391 DEBUG nova.virt.hardware [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.643 189391 DEBUG nova.virt.hardware [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.644 189391 DEBUG nova.virt.hardware [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.652 189391 DEBUG nova.virt.libvirt.vif [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T23:47:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-7486994-asg-gqdvh3lloqbk-w3pew7r5aglv-t7fkcg4jtkgf',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-7486994-asg-gqdvh3lloqbk-w3pew7r5aglv-t7fkcg4jtkgf',id=15,image_ref='aa1a3d84-3b07-42eb-bb8c-755851616ed6',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='92e43243-aca7-437e-ae08-bcb42a48e489'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='717a3950b66241768222cb5d4ba3291e',ramdisk_id='',reservation_id='r-hxdxf1qm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='aa1a3d84-3b07-42eb-bb8c-755851616ed6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1561175050',owner_user_name='tempest-PrometheusGabbiTest-1561175050-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T23:47:36Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='5715267a6ec9422aa9b3ef4a2956aa77',uuid=b7d5e999-38ca-46e8-b572-cc9fad0fc2cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "538c994f-bee1-4965-9065-a8ef17e40bea", "address": "fa:16:3e:47:75:6d", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap538c994f-be", "ovs_interfaceid": "538c994f-bee1-4965-9065-a8ef17e40bea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.653 189391 DEBUG nova.network.os_vif_util [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Converting VIF {"id": "538c994f-bee1-4965-9065-a8ef17e40bea", "address": "fa:16:3e:47:75:6d", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap538c994f-be", "ovs_interfaceid": "538c994f-bee1-4965-9065-a8ef17e40bea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.655 189391 DEBUG nova.network.os_vif_util [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:75:6d,bridge_name='br-int',has_traffic_filtering=True,id=538c994f-bee1-4965-9065-a8ef17e40bea,network=Network(76428163-53d4-4bce-87f0-25b9eaf2a465),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap538c994f-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.656 189391 DEBUG nova.objects.instance [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lazy-loading 'pci_devices' on Instance uuid b7d5e999-38ca-46e8-b572-cc9fad0fc2cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.675 189391 DEBUG nova.virt.libvirt.driver [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] End _get_guest_xml xml=<domain type="kvm">
Nov 26 23:47:41 compute-0 nova_compute[189387]:  <uuid>b7d5e999-38ca-46e8-b572-cc9fad0fc2cc</uuid>
Nov 26 23:47:41 compute-0 nova_compute[189387]:  <name>instance-0000000f</name>
Nov 26 23:47:41 compute-0 nova_compute[189387]:  <memory>131072</memory>
Nov 26 23:47:41 compute-0 nova_compute[189387]:  <vcpu>1</vcpu>
Nov 26 23:47:41 compute-0 nova_compute[189387]:  <metadata>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 26 23:47:41 compute-0 nova_compute[189387]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:      <nova:name>te-7486994-asg-gqdvh3lloqbk-w3pew7r5aglv-t7fkcg4jtkgf</nova:name>
Nov 26 23:47:41 compute-0 nova_compute[189387]:      <nova:creationTime>2025-11-26 23:47:41</nova:creationTime>
Nov 26 23:47:41 compute-0 nova_compute[189387]:      <nova:flavor name="m1.nano">
Nov 26 23:47:41 compute-0 nova_compute[189387]:        <nova:memory>128</nova:memory>
Nov 26 23:47:41 compute-0 nova_compute[189387]:        <nova:disk>1</nova:disk>
Nov 26 23:47:41 compute-0 nova_compute[189387]:        <nova:swap>0</nova:swap>
Nov 26 23:47:41 compute-0 nova_compute[189387]:        <nova:ephemeral>0</nova:ephemeral>
Nov 26 23:47:41 compute-0 nova_compute[189387]:        <nova:vcpus>1</nova:vcpus>
Nov 26 23:47:41 compute-0 nova_compute[189387]:      </nova:flavor>
Nov 26 23:47:41 compute-0 nova_compute[189387]:      <nova:owner>
Nov 26 23:47:41 compute-0 nova_compute[189387]:        <nova:user uuid="5715267a6ec9422aa9b3ef4a2956aa77">tempest-PrometheusGabbiTest-1561175050-project-member</nova:user>
Nov 26 23:47:41 compute-0 nova_compute[189387]:        <nova:project uuid="717a3950b66241768222cb5d4ba3291e">tempest-PrometheusGabbiTest-1561175050</nova:project>
Nov 26 23:47:41 compute-0 nova_compute[189387]:      </nova:owner>
Nov 26 23:47:41 compute-0 nova_compute[189387]:      <nova:root type="image" uuid="aa1a3d84-3b07-42eb-bb8c-755851616ed6"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:      <nova:ports>
Nov 26 23:47:41 compute-0 nova_compute[189387]:        <nova:port uuid="538c994f-bee1-4965-9065-a8ef17e40bea">
Nov 26 23:47:41 compute-0 nova_compute[189387]:          <nova:ip type="fixed" address="10.100.3.7" ipVersion="4"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:        </nova:port>
Nov 26 23:47:41 compute-0 nova_compute[189387]:      </nova:ports>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    </nova:instance>
Nov 26 23:47:41 compute-0 nova_compute[189387]:  </metadata>
Nov 26 23:47:41 compute-0 nova_compute[189387]:  <sysinfo type="smbios">
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <system>
Nov 26 23:47:41 compute-0 nova_compute[189387]:      <entry name="manufacturer">RDO</entry>
Nov 26 23:47:41 compute-0 nova_compute[189387]:      <entry name="product">OpenStack Compute</entry>
Nov 26 23:47:41 compute-0 nova_compute[189387]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 26 23:47:41 compute-0 nova_compute[189387]:      <entry name="serial">b7d5e999-38ca-46e8-b572-cc9fad0fc2cc</entry>
Nov 26 23:47:41 compute-0 nova_compute[189387]:      <entry name="uuid">b7d5e999-38ca-46e8-b572-cc9fad0fc2cc</entry>
Nov 26 23:47:41 compute-0 nova_compute[189387]:      <entry name="family">Virtual Machine</entry>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    </system>
Nov 26 23:47:41 compute-0 nova_compute[189387]:  </sysinfo>
Nov 26 23:47:41 compute-0 nova_compute[189387]:  <os>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <boot dev="hd"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <smbios mode="sysinfo"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:  </os>
Nov 26 23:47:41 compute-0 nova_compute[189387]:  <features>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <acpi/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <apic/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <vmcoreinfo/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:  </features>
Nov 26 23:47:41 compute-0 nova_compute[189387]:  <clock offset="utc">
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <timer name="pit" tickpolicy="delay"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <timer name="hpet" present="no"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:  </clock>
Nov 26 23:47:41 compute-0 nova_compute[189387]:  <cpu mode="host-model" match="exact">
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <topology sockets="1" cores="1" threads="1"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:  </cpu>
Nov 26 23:47:41 compute-0 nova_compute[189387]:  <devices>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <disk type="file" device="disk">
Nov 26 23:47:41 compute-0 nova_compute[189387]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:      <target dev="vda" bus="virtio"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <disk type="file" device="cdrom">
Nov 26 23:47:41 compute-0 nova_compute[189387]:      <driver name="qemu" type="raw" cache="none"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:      <source file="/var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.config"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:      <target dev="sda" bus="sata"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    </disk>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <interface type="ethernet">
Nov 26 23:47:41 compute-0 nova_compute[189387]:      <mac address="fa:16:3e:47:75:6d"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:      <driver name="vhost" rx_queue_size="512"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:      <mtu size="1442"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:      <target dev="tap538c994f-be"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    </interface>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <serial type="pty">
Nov 26 23:47:41 compute-0 nova_compute[189387]:      <log file="/var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/console.log" append="off"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    </serial>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <video>
Nov 26 23:47:41 compute-0 nova_compute[189387]:      <model type="virtio"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    </video>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <input type="tablet" bus="usb"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <rng model="virtio">
Nov 26 23:47:41 compute-0 nova_compute[189387]:      <backend model="random">/dev/urandom</backend>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    </rng>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <controller type="pci" model="pcie-root-port"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <controller type="usb" index="0"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    <memballoon model="virtio">
Nov 26 23:47:41 compute-0 nova_compute[189387]:      <stats period="10"/>
Nov 26 23:47:41 compute-0 nova_compute[189387]:    </memballoon>
Nov 26 23:47:41 compute-0 nova_compute[189387]:  </devices>
Nov 26 23:47:41 compute-0 nova_compute[189387]: </domain>
Nov 26 23:47:41 compute-0 nova_compute[189387]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.679 189391 DEBUG nova.compute.manager [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Preparing to wait for external event network-vif-plugged-538c994f-bee1-4965-9065-a8ef17e40bea prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.679 189391 DEBUG oslo_concurrency.lockutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Acquiring lock "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.680 189391 DEBUG oslo_concurrency.lockutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.681 189391 DEBUG oslo_concurrency.lockutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.683 189391 DEBUG nova.virt.libvirt.vif [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-26T23:47:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-7486994-asg-gqdvh3lloqbk-w3pew7r5aglv-t7fkcg4jtkgf',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-7486994-asg-gqdvh3lloqbk-w3pew7r5aglv-t7fkcg4jtkgf',id=15,image_ref='aa1a3d84-3b07-42eb-bb8c-755851616ed6',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='92e43243-aca7-437e-ae08-bcb42a48e489'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='717a3950b66241768222cb5d4ba3291e',ramdisk_id='',reservation_id='r-hxdxf1qm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='aa1a3d84-3b07-42eb-bb8c-755851616ed6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1561175050',owner_user_name='tempest-PrometheusGabbiTest-1561175050-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-26T23:47:36Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='5715267a6ec9422aa9b3ef4a2956aa77',uuid=b7d5e999-38ca-46e8-b572-cc9fad0fc2cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "538c994f-bee1-4965-9065-a8ef17e40bea", "address": "fa:16:3e:47:75:6d", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap538c994f-be", "ovs_interfaceid": "538c994f-bee1-4965-9065-a8ef17e40bea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.684 189391 DEBUG nova.network.os_vif_util [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Converting VIF {"id": "538c994f-bee1-4965-9065-a8ef17e40bea", "address": "fa:16:3e:47:75:6d", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap538c994f-be", "ovs_interfaceid": "538c994f-bee1-4965-9065-a8ef17e40bea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.685 189391 DEBUG nova.network.os_vif_util [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:75:6d,bridge_name='br-int',has_traffic_filtering=True,id=538c994f-bee1-4965-9065-a8ef17e40bea,network=Network(76428163-53d4-4bce-87f0-25b9eaf2a465),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap538c994f-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.686 189391 DEBUG os_vif [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:75:6d,bridge_name='br-int',has_traffic_filtering=True,id=538c994f-bee1-4965-9065-a8ef17e40bea,network=Network(76428163-53d4-4bce-87f0-25b9eaf2a465),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap538c994f-be') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.687 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.688 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.689 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.694 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.695 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap538c994f-be, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.696 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap538c994f-be, col_values=(('external_ids', {'iface-id': '538c994f-bee1-4965-9065-a8ef17e40bea', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:75:6d', 'vm-uuid': 'b7d5e999-38ca-46e8-b572-cc9fad0fc2cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.698 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:47:41 compute-0 NetworkManager[56227]: <info>  [1764200861.6997] manager: (tap538c994f-be): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.700 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.706 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.708 189391 INFO os_vif [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:75:6d,bridge_name='br-int',has_traffic_filtering=True,id=538c994f-bee1-4965-9065-a8ef17e40bea,network=Network(76428163-53d4-4bce-87f0-25b9eaf2a465),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap538c994f-be')#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.763 189391 DEBUG nova.virt.libvirt.driver [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.765 189391 DEBUG nova.virt.libvirt.driver [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.765 189391 DEBUG nova.virt.libvirt.driver [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] No VIF found with MAC fa:16:3e:47:75:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 26 23:47:41 compute-0 nova_compute[189387]: 2025-11-26 23:47:41.766 189391 INFO nova.virt.libvirt.driver [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Using config drive#033[00m
Nov 26 23:47:42 compute-0 nova_compute[189387]: 2025-11-26 23:47:42.478 189391 INFO nova.virt.libvirt.driver [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Creating config drive at /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.config#033[00m
Nov 26 23:47:42 compute-0 nova_compute[189387]: 2025-11-26 23:47:42.483 189391 DEBUG oslo_concurrency.processutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp11nxlxgz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:47:42 compute-0 nova_compute[189387]: 2025-11-26 23:47:42.628 189391 DEBUG oslo_concurrency.processutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp11nxlxgz" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:47:42 compute-0 kernel: tap538c994f-be: entered promiscuous mode
Nov 26 23:47:42 compute-0 NetworkManager[56227]: <info>  [1764200862.7351] manager: (tap538c994f-be): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Nov 26 23:47:42 compute-0 ovn_controller[97697]: 2025-11-26T23:47:42Z|00234|binding|INFO|Claiming lport 538c994f-bee1-4965-9065-a8ef17e40bea for this chassis.
Nov 26 23:47:42 compute-0 ovn_controller[97697]: 2025-11-26T23:47:42Z|00235|binding|INFO|538c994f-bee1-4965-9065-a8ef17e40bea: Claiming fa:16:3e:47:75:6d 10.100.3.7
Nov 26 23:47:42 compute-0 nova_compute[189387]: 2025-11-26 23:47:42.739 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:47:42 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:47:42.744 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:75:6d 10.100.3.7'], port_security=['fa:16:3e:47:75:6d 10.100.3.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.3.7/16', 'neutron:device_id': 'b7d5e999-38ca-46e8-b572-cc9fad0fc2cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76428163-53d4-4bce-87f0-25b9eaf2a465', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '717a3950b66241768222cb5d4ba3291e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '75bb422f-e7bb-41bc-a8be-3077d4c0bdb7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a3d5333e-350e-4d89-bebd-143dbb215949, chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=538c994f-bee1-4965-9065-a8ef17e40bea) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:47:42 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:47:42.745 106595 INFO neutron.agent.ovn.metadata.agent [-] Port 538c994f-bee1-4965-9065-a8ef17e40bea in datapath 76428163-53d4-4bce-87f0-25b9eaf2a465 bound to our chassis#033[00m
Nov 26 23:47:42 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:47:42.746 106595 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 76428163-53d4-4bce-87f0-25b9eaf2a465#033[00m
Nov 26 23:47:42 compute-0 ovn_controller[97697]: 2025-11-26T23:47:42Z|00236|binding|INFO|Setting lport 538c994f-bee1-4965-9065-a8ef17e40bea ovn-installed in OVS
Nov 26 23:47:42 compute-0 ovn_controller[97697]: 2025-11-26T23:47:42Z|00237|binding|INFO|Setting lport 538c994f-bee1-4965-9065-a8ef17e40bea up in Southbound
Nov 26 23:47:42 compute-0 nova_compute[189387]: 2025-11-26 23:47:42.762 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:47:42 compute-0 nova_compute[189387]: 2025-11-26 23:47:42.766 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:47:42 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:47:42.777 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[b265c41d-c4db-49eb-8162-c6b9f0c41269]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:47:42 compute-0 systemd-machined[155674]: New machine qemu-16-instance-0000000f.
Nov 26 23:47:42 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000000f.
Nov 26 23:47:42 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:47:42.815 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[f405738e-208b-4ab9-93cd-dabf9203a642]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:47:42 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:47:42.821 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[2b60cbc4-0916-4dac-b0da-0555b9d64894]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:47:42 compute-0 systemd-udevd[254497]: Network interface NamePolicy= disabled on kernel command line.
Nov 26 23:47:42 compute-0 NetworkManager[56227]: <info>  [1764200862.8535] device (tap538c994f-be): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 26 23:47:42 compute-0 NetworkManager[56227]: <info>  [1764200862.8576] device (tap538c994f-be): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 26 23:47:42 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:47:42.857 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[36bf12b7-e564-46d8-abac-87783c0eddb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:47:42 compute-0 podman[254468]: 2025-11-26 23:47:42.862829987 +0000 UTC m=+0.144818296 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 26 23:47:42 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:47:42.879 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[459bce1a-66e4-482f-8a60-b90e381df93e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76428163-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3d:fd:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534613, 'reachable_time': 15496, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254504, 'error': None, 'target': 'ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:47:42 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:47:42.897 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[1b8374b7-4e59-4759-bce8-068e0c56ef39]: (4, ({'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tap76428163-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534626, 'tstamp': 534626}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254509, 'error': None, 'target': 'ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap76428163-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534629, 'tstamp': 534629}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254509, 'error': None, 'target': 'ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:47:42 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:47:42.900 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76428163-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:47:42 compute-0 nova_compute[189387]: 2025-11-26 23:47:42.902 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:47:42 compute-0 nova_compute[189387]: 2025-11-26 23:47:42.904 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:47:42 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:47:42.905 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76428163-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:47:42 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:47:42.906 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:47:42 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:47:42.907 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap76428163-50, col_values=(('external_ids', {'iface-id': '6eddef7b-a60a-473c-89bf-18f9394dad32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:47:42 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:47:42.908 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.489 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200863.4885883, b7d5e999-38ca-46e8-b572-cc9fad0fc2cc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.490 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] VM Started (Lifecycle Event)#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.519 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.527 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200863.4887483, b7d5e999-38ca-46e8-b572-cc9fad0fc2cc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.527 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] VM Paused (Lifecycle Event)#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.554 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.561 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.584 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.607 189391 DEBUG nova.compute.manager [req-4bf3a784-e81d-4c8e-aa40-d042bfc62744 req-3dc3299b-3b68-4de2-bf59-f9f5f5655c39 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Received event network-vif-plugged-538c994f-bee1-4965-9065-a8ef17e40bea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.608 189391 DEBUG oslo_concurrency.lockutils [req-4bf3a784-e81d-4c8e-aa40-d042bfc62744 req-3dc3299b-3b68-4de2-bf59-f9f5f5655c39 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.608 189391 DEBUG oslo_concurrency.lockutils [req-4bf3a784-e81d-4c8e-aa40-d042bfc62744 req-3dc3299b-3b68-4de2-bf59-f9f5f5655c39 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.609 189391 DEBUG oslo_concurrency.lockutils [req-4bf3a784-e81d-4c8e-aa40-d042bfc62744 req-3dc3299b-3b68-4de2-bf59-f9f5f5655c39 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.610 189391 DEBUG nova.compute.manager [req-4bf3a784-e81d-4c8e-aa40-d042bfc62744 req-3dc3299b-3b68-4de2-bf59-f9f5f5655c39 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Processing event network-vif-plugged-538c994f-bee1-4965-9065-a8ef17e40bea _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.611 189391 DEBUG nova.compute.manager [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.615 189391 DEBUG nova.virt.driver [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] Emitting event <LifecycleEvent: 1764200863.6156335, b7d5e999-38ca-46e8-b572-cc9fad0fc2cc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.616 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] VM Resumed (Lifecycle Event)#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.619 189391 DEBUG nova.virt.libvirt.driver [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.627 189391 INFO nova.virt.libvirt.driver [-] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Instance spawned successfully.#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.628 189391 DEBUG nova.virt.libvirt.driver [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.634 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.643 189391 DEBUG nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.657 189391 DEBUG nova.virt.libvirt.driver [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.658 189391 DEBUG nova.virt.libvirt.driver [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.659 189391 DEBUG nova.virt.libvirt.driver [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.660 189391 DEBUG nova.virt.libvirt.driver [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.660 189391 DEBUG nova.virt.libvirt.driver [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.661 189391 DEBUG nova.virt.libvirt.driver [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.670 189391 INFO nova.compute.manager [None req-d37881d7-8ac4-44ba-8eed-58d23315dcd9 - - - - - -] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.728 189391 INFO nova.compute.manager [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Took 6.87 seconds to spawn the instance on the hypervisor.#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.729 189391 DEBUG nova.compute.manager [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.800 189391 INFO nova.compute.manager [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Took 7.39 seconds to build instance.#033[00m
Nov 26 23:47:43 compute-0 nova_compute[189387]: 2025-11-26 23:47:43.827 189391 DEBUG oslo_concurrency.lockutils [None req-0e0a2f3e-561b-4cdf-8ea7-2bd65677fa0a 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.479s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:47:44 compute-0 nova_compute[189387]: 2025-11-26 23:47:44.393 189391 DEBUG nova.network.neutron [req-d9399252-9b40-464e-8ddd-d8cc76cff3cf req-8ee08396-177a-4386-82f5-523d6d1213d5 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Updated VIF entry in instance network info cache for port 538c994f-bee1-4965-9065-a8ef17e40bea. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 26 23:47:44 compute-0 nova_compute[189387]: 2025-11-26 23:47:44.395 189391 DEBUG nova.network.neutron [req-d9399252-9b40-464e-8ddd-d8cc76cff3cf req-8ee08396-177a-4386-82f5-523d6d1213d5 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Updating instance_info_cache with network_info: [{"id": "538c994f-bee1-4965-9065-a8ef17e40bea", "address": "fa:16:3e:47:75:6d", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap538c994f-be", "ovs_interfaceid": "538c994f-bee1-4965-9065-a8ef17e40bea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:47:44 compute-0 nova_compute[189387]: 2025-11-26 23:47:44.416 189391 DEBUG oslo_concurrency.lockutils [req-d9399252-9b40-464e-8ddd-d8cc76cff3cf req-8ee08396-177a-4386-82f5-523d6d1213d5 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Releasing lock "refresh_cache-b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:47:45 compute-0 nova_compute[189387]: 2025-11-26 23:47:45.128 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:47:45 compute-0 nova_compute[189387]: 2025-11-26 23:47:45.129 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 23:47:45 compute-0 nova_compute[189387]: 2025-11-26 23:47:45.129 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 23:47:45 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 26 23:47:45 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 26 23:47:45 compute-0 nova_compute[189387]: 2025-11-26 23:47:45.296 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-0449208f-d12b-40cb-aa71-6f67f687cb6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:47:45 compute-0 nova_compute[189387]: 2025-11-26 23:47:45.296 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-0449208f-d12b-40cb-aa71-6f67f687cb6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:47:45 compute-0 nova_compute[189387]: 2025-11-26 23:47:45.297 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 26 23:47:45 compute-0 nova_compute[189387]: 2025-11-26 23:47:45.297 189391 DEBUG nova.objects.instance [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0449208f-d12b-40cb-aa71-6f67f687cb6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:47:45 compute-0 nova_compute[189387]: 2025-11-26 23:47:45.737 189391 DEBUG nova.compute.manager [req-86289a17-7242-4cd0-b484-d33d4d65912b req-75bb650b-4239-4bf4-ba14-61976029fef9 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Received event network-vif-plugged-538c994f-bee1-4965-9065-a8ef17e40bea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:47:45 compute-0 nova_compute[189387]: 2025-11-26 23:47:45.738 189391 DEBUG oslo_concurrency.lockutils [req-86289a17-7242-4cd0-b484-d33d4d65912b req-75bb650b-4239-4bf4-ba14-61976029fef9 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:47:45 compute-0 nova_compute[189387]: 2025-11-26 23:47:45.739 189391 DEBUG oslo_concurrency.lockutils [req-86289a17-7242-4cd0-b484-d33d4d65912b req-75bb650b-4239-4bf4-ba14-61976029fef9 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:47:45 compute-0 nova_compute[189387]: 2025-11-26 23:47:45.739 189391 DEBUG oslo_concurrency.lockutils [req-86289a17-7242-4cd0-b484-d33d4d65912b req-75bb650b-4239-4bf4-ba14-61976029fef9 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:47:45 compute-0 nova_compute[189387]: 2025-11-26 23:47:45.740 189391 DEBUG nova.compute.manager [req-86289a17-7242-4cd0-b484-d33d4d65912b req-75bb650b-4239-4bf4-ba14-61976029fef9 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] No waiting events found dispatching network-vif-plugged-538c994f-bee1-4965-9065-a8ef17e40bea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:47:45 compute-0 nova_compute[189387]: 2025-11-26 23:47:45.740 189391 WARNING nova.compute.manager [req-86289a17-7242-4cd0-b484-d33d4d65912b req-75bb650b-4239-4bf4-ba14-61976029fef9 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Received unexpected event network-vif-plugged-538c994f-bee1-4965-9065-a8ef17e40bea for instance with vm_state active and task_state None.#033[00m
Nov 26 23:47:45 compute-0 nova_compute[189387]: 2025-11-26 23:47:45.810 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:47:46 compute-0 nova_compute[189387]: 2025-11-26 23:47:46.700 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:47:46 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:47:46.786 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbd59242-3683-4df7-8a2a-12b2eb702783, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:47:47 compute-0 nova_compute[189387]: 2025-11-26 23:47:47.362 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Updating instance_info_cache with network_info: [{"id": "a6675240-60ea-47db-9ef6-66080adb5743", "address": "fa:16:3e:d6:2e:64", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6675240-60", "ovs_interfaceid": "a6675240-60ea-47db-9ef6-66080adb5743", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:47:47 compute-0 nova_compute[189387]: 2025-11-26 23:47:47.385 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-0449208f-d12b-40cb-aa71-6f67f687cb6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:47:47 compute-0 nova_compute[189387]: 2025-11-26 23:47:47.385 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 26 23:47:47 compute-0 nova_compute[189387]: 2025-11-26 23:47:47.387 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:47:47 compute-0 nova_compute[189387]: 2025-11-26 23:47:47.387 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 23:47:49 compute-0 nova_compute[189387]: 2025-11-26 23:47:49.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:47:49 compute-0 nova_compute[189387]: 2025-11-26 23:47:49.153 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:47:49 compute-0 nova_compute[189387]: 2025-11-26 23:47:49.154 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:47:49 compute-0 nova_compute[189387]: 2025-11-26 23:47:49.154 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:47:49 compute-0 nova_compute[189387]: 2025-11-26 23:47:49.155 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 23:47:49 compute-0 nova_compute[189387]: 2025-11-26 23:47:49.239 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:47:49 compute-0 nova_compute[189387]: 2025-11-26 23:47:49.319 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:47:49 compute-0 nova_compute[189387]: 2025-11-26 23:47:49.320 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:47:49 compute-0 nova_compute[189387]: 2025-11-26 23:47:49.430 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:47:49 compute-0 nova_compute[189387]: 2025-11-26 23:47:49.443 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:47:49 compute-0 nova_compute[189387]: 2025-11-26 23:47:49.543 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:47:49 compute-0 nova_compute[189387]: 2025-11-26 23:47:49.545 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:47:49 compute-0 nova_compute[189387]: 2025-11-26 23:47:49.654 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:47:50 compute-0 nova_compute[189387]: 2025-11-26 23:47:50.116 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 23:47:50 compute-0 nova_compute[189387]: 2025-11-26 23:47:50.117 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4966MB free_disk=72.27722930908203GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 23:47:50 compute-0 nova_compute[189387]: 2025-11-26 23:47:50.118 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:47:50 compute-0 nova_compute[189387]: 2025-11-26 23:47:50.118 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:47:50 compute-0 nova_compute[189387]: 2025-11-26 23:47:50.215 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 0449208f-d12b-40cb-aa71-6f67f687cb6f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 23:47:50 compute-0 nova_compute[189387]: 2025-11-26 23:47:50.215 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance b7d5e999-38ca-46e8-b572-cc9fad0fc2cc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 23:47:50 compute-0 nova_compute[189387]: 2025-11-26 23:47:50.216 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 23:47:50 compute-0 nova_compute[189387]: 2025-11-26 23:47:50.216 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 23:47:50 compute-0 nova_compute[189387]: 2025-11-26 23:47:50.291 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:47:50 compute-0 nova_compute[189387]: 2025-11-26 23:47:50.307 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 23:47:50 compute-0 nova_compute[189387]: 2025-11-26 23:47:50.329 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 26 23:47:50 compute-0 nova_compute[189387]: 2025-11-26 23:47:50.329 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.211s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:47:50 compute-0 nova_compute[189387]: 2025-11-26 23:47:50.813 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:47:50 compute-0 podman[254554]: 2025-11-26 23:47:50.837812847 +0000 UTC m=+0.087416635 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:47:50 compute-0 podman[254555]: 2025-11-26 23:47:50.844750597 +0000 UTC m=+0.094346995 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent)
Nov 26 23:47:50 compute-0 podman[254552]: 2025-11-26 23:47:50.859323135 +0000 UTC m=+0.144918408 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, release=1214.1726694543, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, name=ubi9, config_id=edpm, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 26 23:47:50 compute-0 podman[254562]: 2025-11-26 23:47:50.85948403 +0000 UTC m=+0.107584916 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-type=git, config_id=edpm, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, architecture=x86_64, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, name=ubi9-minimal)
Nov 26 23:47:50 compute-0 podman[254561]: 2025-11-26 23:47:50.860317113 +0000 UTC m=+0.119321418 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm)
Nov 26 23:47:50 compute-0 podman[254553]: 2025-11-26 23:47:50.891369883 +0000 UTC m=+0.151362605 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 26 23:47:51 compute-0 nova_compute[189387]: 2025-11-26 23:47:51.331 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:47:51 compute-0 nova_compute[189387]: 2025-11-26 23:47:51.703 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:47:53 compute-0 nova_compute[189387]: 2025-11-26 23:47:53.120 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:47:53 compute-0 nova_compute[189387]: 2025-11-26 23:47:53.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:47:54 compute-0 nova_compute[189387]: 2025-11-26 23:47:54.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:47:54 compute-0 nova_compute[189387]: 2025-11-26 23:47:54.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:47:55 compute-0 nova_compute[189387]: 2025-11-26 23:47:55.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:47:55 compute-0 nova_compute[189387]: 2025-11-26 23:47:55.816 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:47:56 compute-0 nova_compute[189387]: 2025-11-26 23:47:56.705 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:47:59 compute-0 podman[203621]: time="2025-11-26T23:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:47:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:47:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4816 "" "Go-http-client/1.1"
Nov 26 23:48:00 compute-0 nova_compute[189387]: 2025-11-26 23:48:00.818 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:48:01 compute-0 openstack_network_exporter[205787]: ERROR   23:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:48:01 compute-0 openstack_network_exporter[205787]: ERROR   23:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:48:01 compute-0 openstack_network_exporter[205787]: ERROR   23:48:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:48:01 compute-0 openstack_network_exporter[205787]: ERROR   23:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:48:01 compute-0 openstack_network_exporter[205787]: ERROR   23:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:48:01 compute-0 nova_compute[189387]: 2025-11-26 23:48:01.707 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:48:02 compute-0 podman[254669]: 2025-11-26 23:48:02.815405104 +0000 UTC m=+0.104423330 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 26 23:48:04 compute-0 nova_compute[189387]: 2025-11-26 23:48:04.121 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:48:05 compute-0 nova_compute[189387]: 2025-11-26 23:48:05.820 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:48:06 compute-0 nova_compute[189387]: 2025-11-26 23:48:06.710 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
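The recurring [POLLIN] DEBUG lines are not Nova activity as such: ovsdbapp holds an OVSDB connection open, and the ovs poller logs a wakeup each time the monitor socket (fd 26 here) becomes readable. A self-contained sketch of that wait/wake loop, assuming the python-ovs package that the logged path points at:

```python
# Self-contained sketch of the ovs poller wakeup behind the
# "[POLLIN] on fd 26 __log_wakeup" lines; the fd here is an arbitrary
# socketpair end standing in for the OVSDB monitor socket.
import socket
import ovs.poller

a, b = socket.socketpair()
b.send(b"update")            # make one end readable, like an OVSDB notification

poller = ovs.poller.Poller()
poller.fd_wait(a.fileno(), ovs.poller.POLLIN)  # wait for readable data
poller.timer_wait(5000)                        # fallback timeout, in ms
poller.block()               # returns on POLLIN; vlog emits the DEBUG wakeup
print("woke up:", a.recv(16))
```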
Nov 26 23:48:06 compute-0 podman[254688]: 2025-11-26 23:48:06.774278735 +0000 UTC m=+0.065463563 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 23:48:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:48:09.656 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:48:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:48:09.658 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:48:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:48:09.659 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:48:10 compute-0 nova_compute[189387]: 2025-11-26 23:48:10.821 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:48:11 compute-0 nova_compute[189387]: 2025-11-26 23:48:11.714 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:48:12 compute-0 ovn_controller[97697]: 2025-11-26T23:48:12Z|00238|memory_trim|INFO|Detected inactivity (last active 30012 ms ago): trimming memory
Nov 26 23:48:13 compute-0 podman[254713]: 2025-11-26 23:48:13.858835136 +0000 UTC m=+0.143332836 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, config_id=edpm)
Nov 26 23:48:15 compute-0 nova_compute[189387]: 2025-11-26 23:48:15.825 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:48:16 compute-0 ovn_controller[97697]: 2025-11-26T23:48:16Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:47:75:6d 10.100.3.7
Nov 26 23:48:16 compute-0 ovn_controller[97697]: 2025-11-26T23:48:16Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:47:75:6d 10.100.3.7
Nov 26 23:48:16 compute-0 nova_compute[189387]: 2025-11-26 23:48:16.718 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:48:20 compute-0 nova_compute[189387]: 2025-11-26 23:48:20.827 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:48:21 compute-0 nova_compute[189387]: 2025-11-26 23:48:21.720 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:48:21 compute-0 podman[254751]: 2025-11-26 23:48:21.835585744 +0000 UTC m=+0.100039591 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 26 23:48:21 compute-0 podman[254744]: 2025-11-26 23:48:21.836416076 +0000 UTC m=+0.092375861 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:48:21 compute-0 podman[254742]: 2025-11-26 23:48:21.848568859 +0000 UTC m=+0.130306829 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., config_id=edpm, version=9.4, distribution-scope=public, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 26 23:48:21 compute-0 podman[254763]: 2025-11-26 23:48:21.864038152 +0000 UTC m=+0.103374211 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.buildah.version=1.33.7, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6)
Nov 26 23:48:21 compute-0 podman[254752]: 2025-11-26 23:48:21.88770163 +0000 UTC m=+0.135064058 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3)
Nov 26 23:48:21 compute-0 podman[254743]: 2025-11-26 23:48:21.900539661 +0000 UTC m=+0.164875645 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 26 23:48:25 compute-0 nova_compute[189387]: 2025-11-26 23:48:25.830 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:48:26 compute-0 nova_compute[189387]: 2025-11-26 23:48:26.723 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:48:29 compute-0 podman[203621]: time="2025-11-26T23:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:48:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:48:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
Nov 26 23:48:30 compute-0 nova_compute[189387]: 2025-11-26 23:48:30.832 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:48:31 compute-0 openstack_network_exporter[205787]: ERROR   23:48:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:48:31 compute-0 openstack_network_exporter[205787]: ERROR   23:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:48:31 compute-0 openstack_network_exporter[205787]: ERROR   23:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:48:31 compute-0 openstack_network_exporter[205787]: ERROR   23:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:48:31 compute-0 openstack_network_exporter[205787]: ERROR   23:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:48:31 compute-0 nova_compute[189387]: 2025-11-26 23:48:31.727 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:48:33 compute-0 podman[254860]: 2025-11-26 23:48:33.817450818 +0000 UTC m=+0.101232202 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Nov 26 23:48:35 compute-0 nova_compute[189387]: 2025-11-26 23:48:35.835 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:48:36 compute-0 nova_compute[189387]: 2025-11-26 23:48:36.730 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.850 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.850 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
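The two manager lines above record that this agent runs all pollsters from the [pollsters] source on a single worker thread, so each polling cycle executes them one after another. A minimal illustration of that condition (not ceilometer's actual code):

```python
# Minimal illustration of the condition logged above: more pollster
# tasks than worker threads, so polling serializes on the one worker.
from concurrent.futures import ThreadPoolExecutor
import time

pollsters = [f"pollster-{i}" for i in range(4)]  # 4 tasks queued

def poll(name):
    time.sleep(0.1)  # stand-in for one pollster's work
    return name

with ThreadPoolExecutor(max_workers=1) as executor:  # [1] thread, as in the log
    for result in executor.map(poll, pollsters):
        print("finished", result)  # tasks complete strictly in sequence
```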
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.851 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.862 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance b7d5e999-38ca-46e8-b572-cc9fad0fc2cc from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 26 23:48:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:36.863 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}caea05af4ff3bb71dca694a18a22cbf449a7452987534b1df6f159c64c91df36" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 26 23:48:37 compute-0 podman[254881]: 2025-11-26 23:48:37.841460652 +0000 UTC m=+0.125575378 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.473 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1830 Content-Type: application/json Date: Wed, 26 Nov 2025 23:48:37 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-b7c13e15-3686-4581-a78c-2084f61a946e x-openstack-request-id: req-b7c13e15-3686-4581-a78c-2084f61a946e _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.473 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc", "name": "te-7486994-asg-gqdvh3lloqbk-w3pew7r5aglv-t7fkcg4jtkgf", "status": "ACTIVE", "tenant_id": "717a3950b66241768222cb5d4ba3291e", "user_id": "5715267a6ec9422aa9b3ef4a2956aa77", "metadata": {"metering.server_group": "92e43243-aca7-437e-ae08-bcb42a48e489"}, "hostId": "27d3802b1abe41bf2d1abd490eb0aa08acfb598924ded34a7e1a15fc", "image": {"id": "aa1a3d84-3b07-42eb-bb8c-755851616ed6", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/aa1a3d84-3b07-42eb-bb8c-755851616ed6"}]}, "flavor": {"id": "a4234b2d-ed51-4e17-ad57-a8fb6154451b", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/a4234b2d-ed51-4e17-ad57-a8fb6154451b"}]}, "created": "2025-11-26T23:47:34Z", "updated": "2025-11-26T23:47:43Z", "addresses": {"": [{"version": 4, "addr": "10.100.3.7", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:47:75:6d"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-26T23:47:43.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000f", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.473 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc used request id req-b7c13e15-3686-4581-a78c-2084f61a946e request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
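The REQ/RESP pair above is novaclient's debug rendering of a plain authenticated GET against the compute API. A sketch of the equivalent call through a keystoneauth1 session; the auth URL and credentials are placeholders (assumptions), and the X-Auth-Token seen in the log is what the session obtains and attaches automatically:

```python
# Hedged sketch of the logged request: GET one server from the compute API.
# auth_url, username, password, and project are placeholder values.
from keystoneauth1 import session
from keystoneauth1.identity import v3

auth = v3.Password(
    auth_url="https://keystone-internal.openstack.svc:5000/v3",  # assumption
    username="ceilometer", password="secret", project_name="service",
    user_domain_name="Default", project_domain_name="Default")
sess = session.Session(auth=auth)

resp = sess.get(
    "https://nova-internal.openstack.svc:8774/v2.1/servers/"
    "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc",
    headers={"X-OpenStack-Nova-API-Version": "2.1"})
print(resp.json()["server"]["status"])  # "ACTIVE" in the response above
```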
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.477 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b7d5e999-38ca-46e8-b572-cc9fad0fc2cc', 'name': 'te-7486994-asg-gqdvh3lloqbk-w3pew7r5aglv-t7fkcg4jtkgf', 'flavor': {'id': 'a4234b2d-ed51-4e17-ad57-a8fb6154451b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'aa1a3d84-3b07-42eb-bb8c-755851616ed6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '717a3950b66241768222cb5d4ba3291e', 'user_id': '5715267a6ec9422aa9b3ef4a2956aa77', 'hostId': '27d3802b1abe41bf2d1abd490eb0aa08acfb598924ded34a7e1a15fc', 'status': 'active', 'metadata': {'metering.server_group': '92e43243-aca7-437e-ae08-bcb42a48e489'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.482 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '0449208f-d12b-40cb-aa71-6f67f687cb6f', 'name': 'te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di', 'flavor': {'id': 'a4234b2d-ed51-4e17-ad57-a8fb6154451b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'aa1a3d84-3b07-42eb-bb8c-755851616ed6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '717a3950b66241768222cb5d4ba3291e', 'user_id': '5715267a6ec9422aa9b3ef4a2956aa77', 'hostId': '27d3802b1abe41bf2d1abd490eb0aa08acfb598924ded34a7e1a15fc', 'status': 'active', 'metadata': {'metering.server_group': '92e43243-aca7-437e-ae08-bcb42a48e489'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.483 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.484 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.484 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.485 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.487 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.487 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.488 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.489 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.489 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.490 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.493 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T23:48:38.485497) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.494 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T23:48:38.490255) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.498 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for b7d5e999-38ca-46e8-b572-cc9fad0fc2cc / tap538c994f-be inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.499 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.incoming.packets volume: 10 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.506 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.508 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.508 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.509 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.509 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.510 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.511 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.512 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.513 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.514 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T23:48:38.510964) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.514 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.514 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.515 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.516 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.517 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T23:48:38.515924) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.517 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.518 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.520 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.520 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.521 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.521 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.521 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.522 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.524 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T23:48:38.522649) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.560 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/cpu volume: 53070000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.606 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/cpu volume: 245950000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.607 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
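The cpu volumes above are cumulative guest CPU time in nanoseconds (53070000000 ns is roughly 53 s of CPU time accrued since boot), so a utilisation figure only appears once two consecutive samples are differenced downstream. Illustrative arithmetic, assuming a hypothetical 300 s poll interval:

    NS_PER_S = 1_000_000_000

    def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus):
        """Share of allotted CPU time used between two cumulative samples."""
        used_s = (curr_ns - prev_ns) / NS_PER_S
        return 100.0 * used_s / (interval_s * vcpus)

    # 3 s of CPU time accrued over a 300 s interval on 1 vCPU = 1 % busy.
    print(cpu_util_percent(53_070_000_000, 56_070_000_000, 300, 1))  # 1.0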
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.608 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.608 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.609 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.610 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.611 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.611 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.613 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.614 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
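Each pollster in these lines is a stevedore.extension.Extension, i.e. a plugin resolved from a Python entry point. How such extensions are typically loaded (the namespace below is an assumption based on ceilometer's layout):

    from stevedore import extension

    mgr = extension.ExtensionManager(
        namespace='ceilometer.poll.compute',  # assumed entry-point namespace
        invoke_on_load=False,
    )
    for ext in mgr:
        # ext is the Extension object printed in the DEBUG lines above:
        # ext.name is the meter name, ext.plugin the pollster class.
        print(ext.name, ext.plugin)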
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.614 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.615 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.615 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T23:48:38.610849) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.616 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.616 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.617 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T23:48:38.617521) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.617 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.639 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.639 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.656 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.657 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.658 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
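disk.device.capacity logs two volumes per instance (1073741824 and 509952 bytes) because per-device pollsters emit one sample for each attached block device, keyed so that each disk becomes its own resource. A sketch of that fan-out with hypothetical device names:

    from dataclasses import dataclass

    @dataclass
    class DiskStats:
        device: str
        capacity: int  # bytes

    def per_device_samples(instance_id, stats):
        for s in stats:
            # resource_id combines instance and device, so each disk is a
            # separate time series downstream.
            yield {"resource_id": f"{instance_id}-{s.device}",
                   "meter": "disk.device.capacity",
                   "volume": s.capacity}

    disks = [DiskStats("vda", 1073741824), DiskStats("vdb", 509952)]
    for sample in per_device_samples("b7d5e999-38ca-46e8-b572-cc9fad0fc2cc", disks):
        print(sample)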
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.658 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.658 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.658 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.659 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.659 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.659 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.660 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.661 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.661 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.662 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.662 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.662 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.663 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.663 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.664 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.664 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
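A *.delta meter reports the change in a cumulative counter between consecutive polls, which is why both instances log 0 here while their cumulative network.outgoing.bytes sits at 1620. A sketch of that differencing with a per-resource cache (assumed logic, not ceilometer's code):

    _last = {}

    def delta(resource, counter):
        prev = _last.get(resource)
        _last[resource] = counter
        if prev is None or counter < prev:
            # First reading, or the counter reset (e.g. instance reboot).
            return 0
        return counter - prev

    print(delta("vm-1", 1620))  # 0: first poll, nothing to diff against
    print(delta("vm-1", 1620))  # 0: unchanged counter, as logged here
    print(delta("vm-1", 1788))  # 168: cf. network.incoming.bytes.delta below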
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.665 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.665 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.666 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.666 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.666 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.666 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/memory.usage volume: 43.51171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.667 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/memory.usage volume: 43.328125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.668 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
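memory.usage is reported in MB, and the fractional volumes (43.51171875, 43.328125) are what a KiB-to-MB conversion of guest memory statistics produces. The exact formula here is an assumption; the arithmetic below merely reproduces the first logged value:

    def memory_usage_mb(available_kib, unused_kib):
        # Usage = memory the guest was given minus what it left unused.
        return (available_kib - unused_kib) / 1024.0

    # 102400 KiB available - 57844 KiB unused = 44556 KiB used.
    print(memory_usage_mb(102400, 57844))  # 43.51171875 MB, as logged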
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.668 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.668 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.669 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.669 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T23:48:38.659414) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.669 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.670 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T23:48:38.663141) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.670 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T23:48:38.666368) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.671 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.671 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.incoming.bytes volume: 1346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.672 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.673 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.673 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.673 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.674 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.674 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.674 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.674 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T23:48:38.670202) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.675 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.675 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-26T23:48:38.674603) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.675 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-w3pew7r5aglv-t7fkcg4jtkgf>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-w3pew7r5aglv-t7fkcg4jtkgf>]
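This is the one genuine error in the cycle: LibvirtInspector has no data for rate meters (see the DEBUG line just above), so the pollster raises PollsterPermanentError and the manager blacklists the listed server for network.outgoing.bytes.rate on this source rather than retrying forever. A simplified sketch of that flow (the exception name comes from the log; everything else is illustrative):

    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            super().__init__(resources)
            self.fail_res_list = resources

    blacklist = []

    def poll_rate_meter(resources):
        # The inspector cannot provide rate data: a permanent condition.
        raise PollsterPermanentError(resources)

    try:
        poll_rate_meter(["<NovaLikeServer: te-7486994-...>"])  # name elided
    except PollsterPermanentError as exc:
        # Drop these resources from future polls of this pollster/source,
        # matching the "Prevent pollster ... anymore!" message above.
        blacklist.extend(exc.fail_res_list)

    print(blacklist)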
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.676 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.676 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.677 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.677 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.677 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.678 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.679 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.679 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.679 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.680 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.680 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.681 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.681 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.682 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T23:48:38.677797) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.682 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T23:48:38.681675) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.729 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.bytes volume: 30137344 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.741 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.795 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.bytes volume: 29572096 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.796 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.797 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
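Throughout this cycle two workers interleave: worker 14 runs the pollsters and emits the heartbeat updates, while worker 12 records them slightly later via _update_status, which is why the "Updated heartbeat" lines trail their polls. A minimal sketch of that hand-off pattern, assuming a queue between the two (speculative, for illustration only):

    import queue
    import threading
    from datetime import datetime, timezone

    beats = queue.Queue()

    def poller():
        for meter in ("cpu", "memory.usage"):
            # ... poll the meter, then announce liveness ...
            beats.put((meter, datetime.now(timezone.utc)))

    def status_writer():
        while True:
            meter, ts = beats.get()
            print(f"Updated heartbeat for {meter} ({ts.isoformat()})")
            beats.task_done()

    threading.Thread(target=status_writer, daemon=True).start()
    poller()
    beats.join()  # let the writer drain before exiting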
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.797 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.797 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.798 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.798 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.798 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.798 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T23:48:38.798328) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.799 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.800 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.800 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.800 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.801 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.801 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.801 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T23:48:38.801217) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.801 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.802 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.802 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.803 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.803 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.803 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.803 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.804 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.804 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.latency volume: 739295997 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.804 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.latency volume: 89632121 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.805 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.latency volume: 931217066 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.806 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.latency volume: 58221202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.806 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.807 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.807 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.808 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T23:48:38.803945) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.808 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.808 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.808 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T23:48:38.808523) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.808 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.808 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.809 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.809 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.810 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.811 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.811 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.811 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.811 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.812 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.812 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.812 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.812 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.813 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.813 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.813 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T23:48:38.812178) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.814 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.814 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.814 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.814 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.814 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.815 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.815 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.requests volume: 1090 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.815 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.816 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.requests volume: 1062 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.816 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.817 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.817 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.817 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.817 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.817 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.818 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.818 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.bytes volume: 72822784 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.818 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T23:48:38.815069) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.818 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.819 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.819 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.820 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.820 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.820 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.821 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.821 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.821 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.821 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.821 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.821 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T23:48:38.818283) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.822 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
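power.state's volume of 1 for both instances means "running"; the meter exposes the hypervisor's numeric domain state. The mapping below follows libvirt's virDomainState enum, which is assumed to be the source of these codes:

    POWER_STATES = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "suspended",
    }
    print(POWER_STATES[1])  # "running", as logged for both instances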
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.822 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.822 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.822 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.822 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.822 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.823 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.latency volume: 2677916534 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.823 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.823 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.latency volume: 3896122278 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.823 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T23:48:38.821292) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.824 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.824 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.824 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.825 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.825 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.825 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.825 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.825 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.825 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.bytes.delta volume: 168 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.826 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
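Note on the delta volumes above (0 for one instance, 168 for the other): delta meters report the difference between the current cumulative counter and the value cached at the previous poll. A hedged sketch of that bookkeeping; the cache layout is illustrative, not ceilometer's:

    cache = {}

    def delta_sample(instance_id, meter, current):
        key = (instance_id, meter)
        previous = cache.get(key)
        cache[key] = current
        if previous is None:
            return 0  # first poll has no baseline
        return max(current - previous, 0)  # clamp counter resets to 0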
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.826 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.826 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.826 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.826 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.826 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.827 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.requests volume: 305 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.827 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.827 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.requests volume: 309 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.827 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T23:48:38.822882) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.828 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.828 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.828 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.829 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.829 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.829 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.829 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.829 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.829 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-w3pew7r5aglv-t7fkcg4jtkgf>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-7486994-asg-gqdvh3lloqbk-w3pew7r5aglv-t7fkcg4jtkgf>]
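Note on the ERROR above: it is ceilometer's permanent-failure contract. LibvirtInspector cannot supply data for IncomingBytesRatePollster, so the pollster raises ceilometer.polling.plugin_base.PollsterPermanentError carrying the failing resources, and the manager stops polling those resources on that source. A simplified stand-in for that flow (not the real classes):

    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            self.fail_res_list = resources

    blacklist = set()

    def poll(pollster, resources):
        todo = [r for r in resources if r not in blacklist]
        try:
            return pollster.get_samples(todo)
        except PollsterPermanentError as err:
            # "Prevent pollster ... from polling [...] anymore!"
            blacklist.update(err.fail_res_list)
            return []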
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.830 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.831 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.831 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.831 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.831 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.831 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.832 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.832 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.832 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.832 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.832 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.832 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.832 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.832 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.832 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.833 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.833 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.833 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.833 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.833 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.833 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.833 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.833 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.834 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.834 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.834 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.835 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T23:48:38.825434) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.835 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T23:48:38.826928) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:48:38 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:48:38.836 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-26T23:48:38.829399) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
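Note the division of labour in the heartbeat lines: worker 14 emits "Pollster heartbeat update" as each meter is polled, while worker 12 later logs "Updated heartbeat for <meter> (<timestamp>)" from _update_status. A queue-based sketch of that producer/consumer pattern, with illustrative names:

    import datetime
    import queue

    heartbeats = queue.Queue()   # filled by the polling worker
    status = {}                  # drained by the status worker

    def heartbeat(meter):
        heartbeats.put((meter, datetime.datetime.utcnow()))

    def update_status():
        meter, ts = heartbeats.get()
        status[meter] = ts  # "Updated heartbeat for <meter> (<ts>)"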
Nov 26 23:48:40 compute-0 nova_compute[189387]: 2025-11-26 23:48:40.838 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:48:41 compute-0 nova_compute[189387]: 2025-11-26 23:48:41.733 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:48:44 compute-0 podman[254903]: 2025-11-26 23:48:44.793872885 +0000 UTC m=+0.104817391 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
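The health_status=healthy fields above come from podman running the configured healthcheck ('/openstack/healthcheck compute') inside the container. One way to read the same status out-of-band is to parse `podman inspect`; the exact key layout ("State" -> "Health") may vary across podman versions, so treat this as an assumption:

    import json
    import subprocess

    def health_status(name):
        out = subprocess.run(['podman', 'inspect', name],
                             capture_output=True, check=True,
                             text=True).stdout
        state = json.loads(out)[0]['State']
        return state.get('Health', {}).get('Status')  # e.g. "healthy"

    print(health_status('ceilometer_agent_compute'))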
Nov 26 23:48:45 compute-0 nova_compute[189387]: 2025-11-26 23:48:45.840 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:48:46 compute-0 nova_compute[189387]: 2025-11-26 23:48:46.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:48:46 compute-0 nova_compute[189387]: 2025-11-26 23:48:46.124 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:48:46 compute-0 nova_compute[189387]: 2025-11-26 23:48:46.737 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:48:47 compute-0 nova_compute[189387]: 2025-11-26 23:48:47.323 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:48:47 compute-0 nova_compute[189387]: 2025-11-26 23:48:47.324 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:48:47 compute-0 nova_compute[189387]: 2025-11-26 23:48:47.324 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 23:48:50 compute-0 nova_compute[189387]: 2025-11-26 23:48:50.474 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Updating instance_info_cache with network_info: [{"id": "538c994f-bee1-4965-9065-a8ef17e40bea", "address": "fa:16:3e:47:75:6d", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap538c994f-be", "ovs_interfaceid": "538c994f-bee1-4965-9065-a8ef17e40bea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
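The cache entry above is a JSON list of VIFs. Pulling the fixed addresses out of it only needs the field names visible in the log line itself; a small sketch:

    import json

    def fixed_ips(network_info):
        ips = []
        for vif in json.loads(network_info):
            for subnet in vif['network']['subnets']:
                ips.extend(ip['address'] for ip in subnet['ips']
                           if ip['type'] == 'fixed')
        return ips  # ["10.100.3.7"] for the entry logged above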
Nov 26 23:48:50 compute-0 nova_compute[189387]: 2025-11-26 23:48:50.501 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 23:48:50 compute-0 nova_compute[189387]: 2025-11-26 23:48:50.502 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 26 23:48:50 compute-0 nova_compute[189387]: 2025-11-26 23:48:50.503 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:48:50 compute-0 nova_compute[189387]: 2025-11-26 23:48:50.503 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 23:48:50 compute-0 nova_compute[189387]: 2025-11-26 23:48:50.504 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:48:50 compute-0 nova_compute[189387]: 2025-11-26 23:48:50.527 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:48:50 compute-0 nova_compute[189387]: 2025-11-26 23:48:50.528 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:48:50 compute-0 nova_compute[189387]: 2025-11-26 23:48:50.529 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:48:50 compute-0 nova_compute[189387]: 2025-11-26 23:48:50.529 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:48:50 compute-0 nova_compute[189387]: 2025-11-26 23:48:50.624 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:48:50 compute-0 nova_compute[189387]: 2025-11-26 23:48:50.705 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:48:50 compute-0 nova_compute[189387]: 2025-11-26 23:48:50.707 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:48:50 compute-0 nova_compute[189387]: 2025-11-26 23:48:50.792 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
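The CMD lines above show nova sizing the instance disk with qemu-img under oslo.concurrency's prlimit wrapper, which is what produces the `/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- ...` prefix. An equivalent call (this mirrors how nova invokes it; the path placeholder is ours):

    import json
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824,  # --as
                                        cpu_time=30)               # --cpu
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info', '/var/lib/nova/instances/<uuid>/disk',
        '--force-share', '--output=json',
        prlimit=limits)
    info = json.loads(out)  # virtual-size, actual-size, format, ...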
Nov 26 23:48:50 compute-0 nova_compute[189387]: 2025-11-26 23:48:50.809 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:48:50 compute-0 nova_compute[189387]: 2025-11-26 23:48:50.843 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:48:50 compute-0 nova_compute[189387]: 2025-11-26 23:48:50.892 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:48:50 compute-0 nova_compute[189387]: 2025-11-26 23:48:50.894 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:48:50 compute-0 nova_compute[189387]: 2025-11-26 23:48:50.962 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:48:51 compute-0 nova_compute[189387]: 2025-11-26 23:48:51.267 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:48:51 compute-0 nova_compute[189387]: 2025-11-26 23:48:51.268 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4961MB free_disk=72.24824905395508GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 23:48:51 compute-0 nova_compute[189387]: 2025-11-26 23:48:51.268 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:48:51 compute-0 nova_compute[189387]: 2025-11-26 23:48:51.269 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:48:51 compute-0 nova_compute[189387]: 2025-11-26 23:48:51.359 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 0449208f-d12b-40cb-aa71-6f67f687cb6f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:48:51 compute-0 nova_compute[189387]: 2025-11-26 23:48:51.360 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance b7d5e999-38ca-46e8-b572-cc9fad0fc2cc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:48:51 compute-0 nova_compute[189387]: 2025-11-26 23:48:51.360 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:48:51 compute-0 nova_compute[189387]: 2025-11-26 23:48:51.361 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 23:48:51 compute-0 nova_compute[189387]: 2025-11-26 23:48:51.426 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:48:51 compute-0 nova_compute[189387]: 2025-11-26 23:48:51.441 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
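Placement treats the inventory above as capacity = (total - reserved) * allocation_ratio per resource class, which is why 2 allocated vCPUs leave this host far from full. Worked out with the logged numbers (min_unit/max_unit/step_size omitted since they don't affect capacity):

    def capacity(total, reserved, allocation_ratio):
        return (total - reserved) * allocation_ratio

    inventory = {
        'VCPU':      dict(total=8,    reserved=0,   allocation_ratio=4.0),
        'MEMORY_MB': dict(total=7680, reserved=512, allocation_ratio=1.0),
        'DISK_GB':   dict(total=79,   reserved=1,   allocation_ratio=0.9),
    }
    for rc, inv in inventory.items():
        print(rc, capacity(**inv))
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2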
Nov 26 23:48:51 compute-0 nova_compute[189387]: 2025-11-26 23:48:51.443 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:48:51 compute-0 nova_compute[189387]: 2025-11-26 23:48:51.444 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
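The Acquiring/acquired/released triplets with waited/held timings throughout this section are oslo.concurrency's lockutils instrumentation. The same lock is taken either with the decorator or the context manager (real API; the empty bodies are placeholders):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        ...  # critical section; waited/held durations are logged on exit

    # equivalently, without the decorator:
    with lockutils.lock('compute_resources'):
        pass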
Nov 26 23:48:51 compute-0 nova_compute[189387]: 2025-11-26 23:48:51.741 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:48:52 compute-0 nova_compute[189387]: 2025-11-26 23:48:52.065 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:48:52 compute-0 podman[254944]: 2025-11-26 23:48:52.821898027 +0000 UTC m=+0.093921262 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 26 23:48:52 compute-0 podman[254943]: 2025-11-26 23:48:52.835848089 +0000 UTC m=+0.099802024 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 23:48:52 compute-0 podman[254948]: 2025-11-26 23:48:52.85782816 +0000 UTC m=+0.109370445 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 26 23:48:52 compute-0 podman[254960]: 2025-11-26 23:48:52.861327016 +0000 UTC m=+0.098071896 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, version=9.6, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, build-date=2025-08-20T13:12:41, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm)
Nov 26 23:48:52 compute-0 podman[254935]: 2025-11-26 23:48:52.871103524 +0000 UTC m=+0.166636363 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, version=9.4, architecture=x86_64, io.buildah.version=1.29.0, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., container_name=kepler, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 26 23:48:52 compute-0 podman[254936]: 2025-11-26 23:48:52.894791183 +0000 UTC m=+0.172491065 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:48:54 compute-0 nova_compute[189387]: 2025-11-26 23:48:54.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:48:55 compute-0 nova_compute[189387]: 2025-11-26 23:48:55.121 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:48:55 compute-0 nova_compute[189387]: 2025-11-26 23:48:55.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:48:55 compute-0 nova_compute[189387]: 2025-11-26 23:48:55.845 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:48:56 compute-0 nova_compute[189387]: 2025-11-26 23:48:56.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:48:56 compute-0 nova_compute[189387]: 2025-11-26 23:48:56.137 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:48:56 compute-0 nova_compute[189387]: 2025-11-26 23:48:56.745 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:48:59 compute-0 podman[203621]: time="2025-11-26T23:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:48:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:48:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4812 "" "Go-http-client/1.1"
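The two GET lines above are the libpod REST API served over podman's unix socket (the podman_exporter config further down shows CONTAINER_HOST=unix:///run/podman/podman.sock). A stdlib-only sketch of the same containers/json query; access to the socket typically requires root:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__('localhost')  # Host header only; not used for routing
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())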
Nov 26 23:49:00 compute-0 nova_compute[189387]: 2025-11-26 23:49:00.847 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:49:01 compute-0 openstack_network_exporter[205787]: ERROR   23:49:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:49:01 compute-0 openstack_network_exporter[205787]: ERROR   23:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:49:01 compute-0 openstack_network_exporter[205787]: ERROR   23:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:49:01 compute-0 openstack_network_exporter[205787]: ERROR   23:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:49:01 compute-0 openstack_network_exporter[205787]: ERROR   23:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
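The appctl errors above mean the exporter could not find the daemons' control sockets, which ovs/ovn daemons create as <daemon>.<pid>.ctl under their run directory; ovn-northd does not run on a compute node, so that lookup fails legitimately here. A sketch of such a probe, using the conventional default path (an assumption, not taken from the exporter's source):

    import glob

    def find_ctl(daemon, rundir='/var/run/openvswitch'):
        # Control sockets are conventionally named <daemon>.<pid>.ctl
        matches = glob.glob(f'{rundir}/{daemon}.*.ctl')
        if not matches:
            raise FileNotFoundError(
                f'no control socket files found for {daemon}')
        return matches[0]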
Nov 26 23:49:01 compute-0 nova_compute[189387]: 2025-11-26 23:49:01.749 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:49:04 compute-0 podman[255051]: 2025-11-26 23:49:04.810583248 +0000 UTC m=+0.108714963 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 23:49:05 compute-0 nova_compute[189387]: 2025-11-26 23:49:05.849 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:49:06 compute-0 nova_compute[189387]: 2025-11-26 23:49:06.753 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:49:08 compute-0 podman[255071]: 2025-11-26 23:49:08.835942387 +0000 UTC m=+0.115508874 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 23:49:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:49:09.657 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:49:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:49:09.658 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:49:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:49:09.659 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:49:10 compute-0 nova_compute[189387]: 2025-11-26 23:49:10.853 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:49:11 compute-0 nova_compute[189387]: 2025-11-26 23:49:11.758 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:49:15 compute-0 podman[255095]: 2025-11-26 23:49:15.830406936 +0000 UTC m=+0.116104350 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm)
Nov 26 23:49:15 compute-0 nova_compute[189387]: 2025-11-26 23:49:15.856 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.124 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.125 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.125 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.125 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.125 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.126 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.157 189391 DEBUG nova.virt.libvirt.imagecache [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.175 189391 DEBUG nova.virt.libvirt.imagecache [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.175 189391 DEBUG nova.virt.libvirt.imagecache [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Image id aa1a3d84-3b07-42eb-bb8c-755851616ed6 yields fingerprint b6646de0a938e108bf82b01ae34ceaf07f09b8ad _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.175 189391 INFO nova.virt.libvirt.imagecache [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] image aa1a3d84-3b07-42eb-bb8c-755851616ed6 at (/var/lib/nova/instances/_base/b6646de0a938e108bf82b01ae34ceaf07f09b8ad): checking
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.176 189391 DEBUG nova.virt.libvirt.imagecache [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] image aa1a3d84-3b07-42eb-bb8c-755851616ed6 at (/var/lib/nova/instances/_base/b6646de0a938e108bf82b01ae34ceaf07f09b8ad): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.178 189391 DEBUG nova.virt.libvirt.imagecache [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.179 189391 DEBUG nova.virt.libvirt.imagecache [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] 0449208f-d12b-40cb-aa71-6f67f687cb6f is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.179 189391 DEBUG nova.virt.libvirt.imagecache [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] 0449208f-d12b-40cb-aa71-6f67f687cb6f has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.179 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.272 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.274 189391 DEBUG nova.virt.libvirt.imagecache [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 0449208f-d12b-40cb-aa71-6f67f687cb6f is backed by b6646de0a938e108bf82b01ae34ceaf07f09b8ad _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.275 189391 DEBUG nova.virt.libvirt.imagecache [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.276 189391 DEBUG nova.virt.libvirt.imagecache [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.277 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.350 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.352 189391 DEBUG nova.virt.libvirt.imagecache [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance b7d5e999-38ca-46e8-b572-cc9fad0fc2cc is backed by b6646de0a938e108bf82b01ae34ceaf07f09b8ad _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.353 189391 WARNING nova.virt.libvirt.imagecache [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.353 189391 WARNING nova.virt.libvirt.imagecache [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/70621b30123d1851a67a3cfd3d5b49a7a1030e86
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.354 189391 WARNING nova.virt.libvirt.imagecache [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.355 189391 INFO nova.virt.libvirt.imagecache [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Active base files: /var/lib/nova/instances/_base/b6646de0a938e108bf82b01ae34ceaf07f09b8ad
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.355 189391 INFO nova.virt.libvirt.imagecache [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Removable base files: /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead /var/lib/nova/instances/_base/70621b30123d1851a67a3cfd3d5b49a7a1030e86 /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.357 189391 INFO nova.virt.libvirt.imagecache [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/88820ed9476b98465b4ed33781797613b42e7ead
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.358 189391 INFO nova.virt.libvirt.imagecache [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/70621b30123d1851a67a3cfd3d5b49a7a1030e86
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.359 189391 INFO nova.virt.libvirt.imagecache [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/4bfc824fda96e5558a690ed70963ecd686d78685
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.359 189391 DEBUG nova.virt.libvirt.imagecache [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.360 189391 DEBUG nova.virt.libvirt.imagecache [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.361 189391 DEBUG nova.virt.libvirt.imagecache [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.361 189391 INFO nova.virt.libvirt.imagecache [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Nov 26 23:49:16 compute-0 nova_compute[189387]: 2025-11-26 23:49:16.763 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:49:20 compute-0 nova_compute[189387]: 2025-11-26 23:49:20.859 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:49:21 compute-0 nova_compute[189387]: 2025-11-26 23:49:21.767 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:49:23 compute-0 podman[255132]: 2025-11-26 23:49:23.819653101 +0000 UTC m=+0.087014745 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 23:49:23 compute-0 podman[255130]: 2025-11-26 23:49:23.841949416 +0000 UTC m=+0.114066316 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, release=1214.1726694543, release-0.7.12=, name=ubi9, config_id=edpm, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., container_name=kepler, maintainer=Red Hat, Inc.)
Nov 26 23:49:23 compute-0 podman[255131]: 2025-11-26 23:49:23.852753314 +0000 UTC m=+0.131483991 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 23:49:23 compute-0 podman[255151]: 2025-11-26 23:49:23.862890574 +0000 UTC m=+0.099224739 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vcs-type=git, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, release=1755695350, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, version=9.6, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., maintainer=Red Hat, Inc.)
Nov 26 23:49:23 compute-0 podman[255138]: 2025-11-26 23:49:23.872134481 +0000 UTC m=+0.115236476 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 26 23:49:23 compute-0 podman[255150]: 2025-11-26 23:49:23.877932726 +0000 UTC m=+0.126118078 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 23:49:25 compute-0 nova_compute[189387]: 2025-11-26 23:49:25.861 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:49:26 compute-0 nova_compute[189387]: 2025-11-26 23:49:26.772 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:49:29 compute-0 podman[203621]: time="2025-11-26T23:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:49:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:49:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4810 "" "Go-http-client/1.1"
Nov 26 23:49:30 compute-0 nova_compute[189387]: 2025-11-26 23:49:30.865 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:49:31 compute-0 openstack_network_exporter[205787]: ERROR   23:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:49:31 compute-0 openstack_network_exporter[205787]: ERROR   23:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:49:31 compute-0 openstack_network_exporter[205787]: ERROR   23:49:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:49:31 compute-0 openstack_network_exporter[205787]: ERROR   23:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:49:31 compute-0 openstack_network_exporter[205787]: ERROR   23:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:49:31 compute-0 nova_compute[189387]: 2025-11-26 23:49:31.775 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:49:35 compute-0 podman[255247]: 2025-11-26 23:49:35.852049538 +0000 UTC m=+0.134850680 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 23:49:35 compute-0 nova_compute[189387]: 2025-11-26 23:49:35.867 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:49:36 compute-0 nova_compute[189387]: 2025-11-26 23:49:36.780 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:49:39 compute-0 nova_compute[189387]: 2025-11-26 23:49:39.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:49:39 compute-0 nova_compute[189387]: 2025-11-26 23:49:39.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 26 23:49:39 compute-0 nova_compute[189387]: 2025-11-26 23:49:39.148 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 26 23:49:39 compute-0 podman[255266]: 2025-11-26 23:49:39.8149505 +0000 UTC m=+0.101921112 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 23:49:40 compute-0 nova_compute[189387]: 2025-11-26 23:49:40.871 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:49:41 compute-0 nova_compute[189387]: 2025-11-26 23:49:41.785 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:49:45 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 26 23:49:45 compute-0 nova_compute[189387]: 2025-11-26 23:49:45.871 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:49:46 compute-0 nova_compute[189387]: 2025-11-26 23:49:46.787 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:49:46 compute-0 podman[255291]: 2025-11-26 23:49:46.826752022 +0000 UTC m=+0.116250594 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 26 23:49:48 compute-0 nova_compute[189387]: 2025-11-26 23:49:48.149 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:49:48 compute-0 nova_compute[189387]: 2025-11-26 23:49:48.150 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 23:49:48 compute-0 nova_compute[189387]: 2025-11-26 23:49:48.150 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 23:49:48 compute-0 nova_compute[189387]: 2025-11-26 23:49:48.334 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-0449208f-d12b-40cb-aa71-6f67f687cb6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:49:48 compute-0 nova_compute[189387]: 2025-11-26 23:49:48.335 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-0449208f-d12b-40cb-aa71-6f67f687cb6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:49:48 compute-0 nova_compute[189387]: 2025-11-26 23:49:48.336 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 26 23:49:48 compute-0 nova_compute[189387]: 2025-11-26 23:49:48.336 189391 DEBUG nova.objects.instance [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0449208f-d12b-40cb-aa71-6f67f687cb6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:49:50 compute-0 nova_compute[189387]: 2025-11-26 23:49:50.364 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Updating instance_info_cache with network_info: [{"id": "a6675240-60ea-47db-9ef6-66080adb5743", "address": "fa:16:3e:d6:2e:64", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6675240-60", "ovs_interfaceid": "a6675240-60ea-47db-9ef6-66080adb5743", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:49:50 compute-0 nova_compute[189387]: 2025-11-26 23:49:50.388 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-0449208f-d12b-40cb-aa71-6f67f687cb6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:49:50 compute-0 nova_compute[189387]: 2025-11-26 23:49:50.389 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 26 23:49:50 compute-0 nova_compute[189387]: 2025-11-26 23:49:50.390 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:49:50 compute-0 nova_compute[189387]: 2025-11-26 23:49:50.390 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 23:49:50 compute-0 nova_compute[189387]: 2025-11-26 23:49:50.875 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:49:51 compute-0 nova_compute[189387]: 2025-11-26 23:49:51.791 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:49:52 compute-0 nova_compute[189387]: 2025-11-26 23:49:52.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:49:52 compute-0 nova_compute[189387]: 2025-11-26 23:49:52.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:49:52 compute-0 nova_compute[189387]: 2025-11-26 23:49:52.164 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:49:52 compute-0 nova_compute[189387]: 2025-11-26 23:49:52.165 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:49:52 compute-0 nova_compute[189387]: 2025-11-26 23:49:52.167 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:49:52 compute-0 nova_compute[189387]: 2025-11-26 23:49:52.168 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 23:49:52 compute-0 nova_compute[189387]: 2025-11-26 23:49:52.270 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:49:52 compute-0 nova_compute[189387]: 2025-11-26 23:49:52.371 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:49:52 compute-0 nova_compute[189387]: 2025-11-26 23:49:52.374 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:49:52 compute-0 nova_compute[189387]: 2025-11-26 23:49:52.461 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:49:52 compute-0 nova_compute[189387]: 2025-11-26 23:49:52.474 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:49:52 compute-0 nova_compute[189387]: 2025-11-26 23:49:52.541 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:49:52 compute-0 nova_compute[189387]: 2025-11-26 23:49:52.544 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:49:52 compute-0 nova_compute[189387]: 2025-11-26 23:49:52.612 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:49:53 compute-0 nova_compute[189387]: 2025-11-26 23:49:53.038 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 23:49:53 compute-0 nova_compute[189387]: 2025-11-26 23:49:53.041 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4894MB free_disk=72.24828338623047GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 23:49:53 compute-0 nova_compute[189387]: 2025-11-26 23:49:53.042 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:49:53 compute-0 nova_compute[189387]: 2025-11-26 23:49:53.044 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:49:53 compute-0 nova_compute[189387]: 2025-11-26 23:49:53.339 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 0449208f-d12b-40cb-aa71-6f67f687cb6f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 23:49:53 compute-0 nova_compute[189387]: 2025-11-26 23:49:53.341 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance b7d5e999-38ca-46e8-b572-cc9fad0fc2cc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 23:49:53 compute-0 nova_compute[189387]: 2025-11-26 23:49:53.341 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 23:49:53 compute-0 nova_compute[189387]: 2025-11-26 23:49:53.341 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 23:49:53 compute-0 nova_compute[189387]: 2025-11-26 23:49:53.448 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing inventories for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 26 23:49:53 compute-0 nova_compute[189387]: 2025-11-26 23:49:53.553 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Updating ProviderTree inventory for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 26 23:49:53 compute-0 nova_compute[189387]: 2025-11-26 23:49:53.554 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Updating inventory in ProviderTree for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 26 23:49:53 compute-0 nova_compute[189387]: 2025-11-26 23:49:53.579 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing aggregate associations for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 26 23:49:53 compute-0 nova_compute[189387]: 2025-11-26 23:49:53.615 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing trait associations for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78, traits: COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE41,HW_CPU_X86_AMD_SVM,HW_CPU_X86_MMX,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,HW_CPU_X86_SSE,HW_CPU_X86_AESNI,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI2,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 26 23:49:53 compute-0 nova_compute[189387]: 2025-11-26 23:49:53.694 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:49:53 compute-0 nova_compute[189387]: 2025-11-26 23:49:53.713 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
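Placement computes the schedulable capacity of each resource class as (total - reserved) * allocation_ratio, so the unchanged inventory above still advertises 32 VCPUs, 7168 MB of RAM, and 70.2 GB of disk to the scheduler. A minimal recomputation:

# Recompute schedulable capacity from the inventory logged above.
inventory = {
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2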
Nov 26 23:49:53 compute-0 nova_compute[189387]: 2025-11-26 23:49:53.715 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:49:53 compute-0 nova_compute[189387]: 2025-11-26 23:49:53.715 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
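The acquire/release pair bracketing this update cycle comes from oslo.concurrency's named-lock decorator, which logs the wait and hold times seen above around the critical section. A minimal sketch of the pattern:

from oslo_concurrency import lockutils

@lockutils.synchronized("compute_resources")
def update_available_resource():
    # Everything here runs with the named lock held; oslo logs
    # 'acquired ... waited Ns' on entry and '"released" ... held Ns'
    # on exit, exactly as in the lines above.
    pass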
Nov 26 23:49:54 compute-0 podman[255333]: 2025-11-26 23:49:54.843822278 +0000 UTC m=+0.102456906 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, name=ubi9-minimal, container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, version=9.6, architecture=x86_64, vendor=Red Hat, Inc., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-type=git, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 26 23:49:54 compute-0 podman[255323]: 2025-11-26 23:49:54.860167265 +0000 UTC m=+0.140901443 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, version=9.4, managed_by=edpm_ansible, name=ubi9, maintainer=Red Hat, Inc., release-0.7.12=, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.expose-services=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, distribution-scope=public, config_id=edpm, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 26 23:49:54 compute-0 podman[255326]: 2025-11-26 23:49:54.869653489 +0000 UTC m=+0.136585968 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:49:54 compute-0 podman[255324]: 2025-11-26 23:49:54.874121348 +0000 UTC m=+0.148655510 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible)
Nov 26 23:49:54 compute-0 podman[255325]: 2025-11-26 23:49:54.874845357 +0000 UTC m=+0.131936023 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:49:54 compute-0 podman[255329]: 2025-11-26 23:49:54.875415202 +0000 UTC m=+0.133961547 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
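Each health_status record above embeds the config_data that edpm_ansible used to create the container, including the healthcheck test and the port, network, and volume settings. Roughly how such an entry maps onto a podman invocation (a hypothetical rendering for illustration; the actual translation is done by edpm_ansible):

import shlex

# Hypothetical rendering of one config_data entry (node_exporter above)
# into a podman command line.
config_data = {
    "image": "quay.io/prometheus/node-exporter:v1.5.0",
    "net": "host",
    "privileged": True,
    "ports": ["9100:9100"],
    "volumes": ["/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z"],
    "healthcheck": {"test": "/openstack/healthcheck node_exporter"},
}
cmd = ["podman", "run", "--name", "node_exporter"]
if config_data.get("net") == "host":
    cmd += ["--network", "host"]
if config_data.get("privileged"):
    cmd += ["--privileged"]
for port in config_data.get("ports", []):
    cmd += ["--publish", port]
for vol in config_data.get("volumes", []):
    cmd += ["--volume", vol]
cmd += ["--health-cmd", config_data["healthcheck"]["test"], config_data["image"]]
print(shlex.join(cmd))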
Nov 26 23:49:55 compute-0 nova_compute[189387]: 2025-11-26 23:49:55.876 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:49:56 compute-0 nova_compute[189387]: 2025-11-26 23:49:56.712 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:49:56 compute-0 nova_compute[189387]: 2025-11-26 23:49:56.713 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:49:56 compute-0 nova_compute[189387]: 2025-11-26 23:49:56.714 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:49:56 compute-0 nova_compute[189387]: 2025-11-26 23:49:56.795 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:49:57 compute-0 nova_compute[189387]: 2025-11-26 23:49:57.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:49:57 compute-0 nova_compute[189387]: 2025-11-26 23:49:57.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
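The "Running periodic task" lines come from oslo.service, which collects methods tagged with the periodic_task decorator and runs them on a timer, logging each dispatch. The shape of that machinery, as a sketch:

from oslo_config import cfg
from oslo_service import periodic_task

class Manager(periodic_task.PeriodicTasks):
    @periodic_task.periodic_task(spacing=60)
    def _poll_rescued_instances(self, context):
        # oslo_service logs "Running periodic task ..." before each call.
        pass

manager = Manager(cfg.CONF)
manager.run_periodic_tasks(context=None)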
Nov 26 23:49:59 compute-0 podman[203621]: time="2025-11-26T23:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:49:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:49:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4815 "" "Go-http-client/1.1"
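These access-log entries are the metrics exporter polling podman's libpod REST API over its unix socket (the socket path appears in the podman_exporter config further down). The same query can be reproduced with nothing but the standard library:

import json
import socket

SOCK = "/run/podman/podman.sock"  # CONTAINER_HOST from the exporter config
REQUEST = (
    "GET /v4.9.3/libpod/containers/json?all=true HTTP/1.0\r\n"
    "Host: localhost\r\n\r\n"
)

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect(SOCK)
    s.sendall(REQUEST.encode())
    raw = b"".join(iter(lambda: s.recv(65536), b""))  # HTTP/1.0: read to EOF

_headers, _, body = raw.partition(b"\r\n\r\n")
print(len(json.loads(body)), "containers")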
Nov 26 23:50:00 compute-0 nova_compute[189387]: 2025-11-26 23:50:00.880 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:50:01 compute-0 openstack_network_exporter[205787]: ERROR   23:50:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:50:01 compute-0 openstack_network_exporter[205787]: ERROR   23:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:50:01 compute-0 openstack_network_exporter[205787]: ERROR   23:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:50:01 compute-0 openstack_network_exporter[205787]: ERROR   23:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:50:01 compute-0 openstack_network_exporter[205787]: ERROR   23:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
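These exporter errors are expected on a compute node: the appctl-style calls are routed through per-daemon control sockets, and neither ovn-northd (a controller-side daemon) nor a userspace datapath runs here, so the sockets the exporter probes for are absent. A quick way to see what is actually available (conventional run directories assumed):

import glob

# List the ovs-appctl control sockets present on this host.
for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
    found = glob.glob(pattern)
    print(pattern, "->", found or "no control sockets")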
Nov 26 23:50:01 compute-0 nova_compute[189387]: 2025-11-26 23:50:01.799 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:50:04 compute-0 nova_compute[189387]: 2025-11-26 23:50:04.121 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:50:05 compute-0 nova_compute[189387]: 2025-11-26 23:50:05.884 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:50:06 compute-0 nova_compute[189387]: 2025-11-26 23:50:06.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:50:06 compute-0 nova_compute[189387]: 2025-11-26 23:50:06.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 26 23:50:06 compute-0 nova_compute[189387]: 2025-11-26 23:50:06.145 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:50:06 compute-0 podman[255441]: 2025-11-26 23:50:06.79506817 +0000 UTC m=+0.096436045 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 23:50:06 compute-0 nova_compute[189387]: 2025-11-26 23:50:06.803 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:50:06 compute-0 nova_compute[189387]: 2025-11-26 23:50:06.969 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:50:07 compute-0 nova_compute[189387]: 2025-11-26 23:50:07.002 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Triggering sync for uuid 0449208f-d12b-40cb-aa71-6f67f687cb6f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 26 23:50:07 compute-0 nova_compute[189387]: 2025-11-26 23:50:07.003 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Triggering sync for uuid b7d5e999-38ca-46e8-b572-cc9fad0fc2cc _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 26 23:50:07 compute-0 nova_compute[189387]: 2025-11-26 23:50:07.004 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "0449208f-d12b-40cb-aa71-6f67f687cb6f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:50:07 compute-0 nova_compute[189387]: 2025-11-26 23:50:07.004 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "0449208f-d12b-40cb-aa71-6f67f687cb6f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:50:07 compute-0 nova_compute[189387]: 2025-11-26 23:50:07.005 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:50:07 compute-0 nova_compute[189387]: 2025-11-26 23:50:07.006 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:50:07 compute-0 nova_compute[189387]: 2025-11-26 23:50:07.033 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:50:07 compute-0 nova_compute[189387]: 2025-11-26 23:50:07.034 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "0449208f-d12b-40cb-aa71-6f67f687cb6f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
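_sync_power_states takes a short-lived lock named after each instance UUID, so a concurrent API-driven power action on the same instance cannot race the periodic sync; the waited/held timings above show both locks cycling in well under a tenth of a second. The per-UUID pattern, reduced to a sketch:

from oslo_concurrency import lockutils

def query_driver_power_state_and_sync(instance_uuid):
    # One named lock per instance, as in the acquire/release pairs above.
    @lockutils.synchronized(instance_uuid)
    def _sync():
        pass  # compare the driver's power state with the DB record here
    _sync()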
Nov 26 23:50:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:50:09.659 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:50:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:50:09.660 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:50:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:50:09.661 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
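ProcessMonitor is neutron's watchdog for external helpers (such as the haproxy instances the metadata agent spawns); each pass takes the _check_child_processes lock and verifies the children are still alive. A liveness probe in that spirit (illustrative, not neutron source):

import os

def child_alive(pid_file):
    # Read the helper's pid file and probe with signal 0.
    try:
        pid = int(open(pid_file).read().strip())
        os.kill(pid, 0)  # raises OSError if the process is gone
        return True
    except (OSError, ValueError):
        return False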
Nov 26 23:50:10 compute-0 podman[255461]: 2025-11-26 23:50:10.840860304 +0000 UTC m=+0.120850457 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:50:10 compute-0 nova_compute[189387]: 2025-11-26 23:50:10.888 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:50:11 compute-0 nova_compute[189387]: 2025-11-26 23:50:11.806 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:50:15 compute-0 nova_compute[189387]: 2025-11-26 23:50:15.891 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:50:16 compute-0 nova_compute[189387]: 2025-11-26 23:50:16.809 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:50:17 compute-0 systemd[1]: Starting dnf makecache...
Nov 26 23:50:17 compute-0 podman[255486]: 2025-11-26 23:50:17.831986264 +0000 UTC m=+0.116005018 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:50:17 compute-0 dnf[255487]: Metadata cache refreshed recently.
Nov 26 23:50:17 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 26 23:50:17 compute-0 systemd[1]: Finished dnf makecache.
Nov 26 23:50:20 compute-0 nova_compute[189387]: 2025-11-26 23:50:20.894 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:50:21 compute-0 nova_compute[189387]: 2025-11-26 23:50:21.812 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:50:25 compute-0 podman[255520]: 2025-11-26 23:50:25.837775631 +0000 UTC m=+0.097141525 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:50:25 compute-0 podman[255507]: 2025-11-26 23:50:25.851730814 +0000 UTC m=+0.142241019 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, managed_by=edpm_ansible)
Nov 26 23:50:25 compute-0 podman[255531]: 2025-11-26 23:50:25.855865264 +0000 UTC m=+0.106974587 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vendor=Red Hat, Inc., name=ubi9-minimal)
Nov 26 23:50:25 compute-0 podman[255508]: 2025-11-26 23:50:25.857550619 +0000 UTC m=+0.126182629 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 23:50:25 compute-0 podman[255514]: 2025-11-26 23:50:25.863367574 +0000 UTC m=+0.126384275 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 26 23:50:25 compute-0 podman[255506]: 2025-11-26 23:50:25.872199619 +0000 UTC m=+0.156817037 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, io.openshift.tags=base rhel9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, config_id=edpm, release=1214.1726694543, version=9.4, container_name=kepler)
Nov 26 23:50:25 compute-0 nova_compute[189387]: 2025-11-26 23:50:25.895 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:50:26 compute-0 nova_compute[189387]: 2025-11-26 23:50:26.816 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:50:29 compute-0 podman[203621]: time="2025-11-26T23:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:50:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:50:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
Nov 26 23:50:30 compute-0 nova_compute[189387]: 2025-11-26 23:50:30.896 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:50:31 compute-0 openstack_network_exporter[205787]: ERROR   23:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:50:31 compute-0 openstack_network_exporter[205787]: ERROR   23:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:50:31 compute-0 openstack_network_exporter[205787]: ERROR   23:50:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:50:31 compute-0 openstack_network_exporter[205787]: ERROR   23:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:50:31 compute-0 openstack_network_exporter[205787]: ERROR   23:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:50:31 compute-0 nova_compute[189387]: 2025-11-26 23:50:31.819 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:50:35 compute-0 nova_compute[189387]: 2025-11-26 23:50:35.899 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:50:36 compute-0 nova_compute[189387]: 2025-11-26 23:50:36.824 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.851 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] exceeds the number of worker threads available to execute them, so the polling run can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.851 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.852 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
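[annotation] The run of "Registering pollster" lines above shows the agent binding every loaded stevedore Extension to one shared ThreadPoolExecutor, each with empty per-cycle cache, history, and discovery-cache dicts. A minimal Python sketch of that pattern follows; the Extension class is a stand-in for stevedore.extension.Extension and the body of register_pollster_execution is illustrative, not ceilometer's actual implementation.

# Hypothetical sketch of the registration pattern seen in the log:
# each pollster extension is queued onto one shared thread pool
# together with fresh per-run caches.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass


@dataclass
class Extension:  # stand-in for stevedore.extension.Extension
    name: str

    def poll(self, cache, history, discovery_cache):
        # A real pollster would discover resources and emit samples here.
        return f"{self.name}: polled"


def register_pollster_execution(executor, extension,
                                cache, history, discovery_cache):
    """Submit one pollster run to the shared executor (sketch only)."""
    print(f"Registering pollster [{extension!r}] via executor [{executor!r}]")
    return executor.submit(extension.poll, cache, history, discovery_cache)


if __name__ == "__main__":
    pollsters = [Extension("disk.ephemeral.size"),
                 Extension("network.incoming.packets")]
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [register_pollster_execution(pool, ext, {}, {}, {})
                   for ext in pollsters]
        for f in futures:
            print(f.result())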
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.861 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b7d5e999-38ca-46e8-b572-cc9fad0fc2cc', 'name': 'te-7486994-asg-gqdvh3lloqbk-w3pew7r5aglv-t7fkcg4jtkgf', 'flavor': {'id': 'a4234b2d-ed51-4e17-ad57-a8fb6154451b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'aa1a3d84-3b07-42eb-bb8c-755851616ed6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '717a3950b66241768222cb5d4ba3291e', 'user_id': '5715267a6ec9422aa9b3ef4a2956aa77', 'hostId': '27d3802b1abe41bf2d1abd490eb0aa08acfb598924ded34a7e1a15fc', 'status': 'active', 'metadata': {'metering.server_group': '92e43243-aca7-437e-ae08-bcb42a48e489'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.861 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.862 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.862 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.862 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.864 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '0449208f-d12b-40cb-aa71-6f67f687cb6f', 'name': 'te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di', 'flavor': {'id': 'a4234b2d-ed51-4e17-ad57-a8fb6154451b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'aa1a3d84-3b07-42eb-bb8c-755851616ed6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '717a3950b66241768222cb5d4ba3291e', 'user_id': '5715267a6ec9422aa9b3ef4a2956aa77', 'hostId': '27d3802b1abe41bf2d1abd490eb0aa08acfb598924ded34a7e1a15fc', 'status': 'active', 'metadata': {'metering.server_group': '92e43243-aca7-437e-ae08-bcb42a48e489'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
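[annotation] The two "instance data" lines above show the dict that each libvirt discovery cycle yields per running instance. The sketch below reproduces that shape with field names copied from the log; build_instance_record is a hypothetical helper, not part of ceilometer.

# Shape of one discovery record as printed by discover_libvirt_polling.
def build_instance_record(instance_id, name, host, flavor_name="m1.nano"):
    return {
        "id": instance_id,
        "name": name,
        "flavor": {"name": flavor_name, "vcpus": 1, "ram": 128,
                   "disk": 1, "ephemeral": 0, "swap": 0},
        "os_type": "hvm",
        "architecture": "x86_64",
        "OS-EXT-SRV-ATTR:host": host,
        "OS-EXT-STS:vm_state": "running",
        "status": "active",
    }


record = build_instance_record(
    "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc",
    "te-7486994-asg-gqdvh3lloqbk-w3pew7r5aglv-t7fkcg4jtkgf",
    "compute-0.ctlplane.example.com")
assert record["flavor"]["ram"] == 128  # MiB, matching the m1.nano flavor above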
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.865 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.865 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.865 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.865 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.865 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
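[annotation] The "Checking if we need coordination" / "not configured in a source for polling that requires coordination" pair above is the gate each pollster passes before running: with no coordination group configured the agent polls unconditionally; only coordinated sources consult hash-ring membership. The sketch below illustrates that decision under stated assumptions; ceilometer actually delegates ring membership to the tooz library, and the modulo-based ownership test here is a toy stand-in.

# Hypothetical sketch of the coordination gate logged above.
import zlib


def belongs_to_me(resource_id: str, my_index: int, ring_size: int) -> bool:
    # Toy stand-in for hash-ring ownership (not tooz).
    return zlib.crc32(resource_id.encode()) % ring_size == my_index


def should_poll(group_name, resource_id, my_index=0, ring_size=1):
    if group_name is None:
        # "not configured in a source for polling that requires coordination"
        return True
    return belongs_to_me(resource_id, my_index, ring_size)


print(should_poll(None, "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc"))  # True
print(should_poll("central", "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc",
                  my_index=1, ring_size=3))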
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.866 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.866 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.866 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.866 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.867 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T23:50:36.865372) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.868 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T23:50:36.866417) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
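[annotation] Note the thread handoff visible above: worker thread 14 records a "heartbeat update" at the moment a pollster runs, and a separate status thread 12 later logs "Updated heartbeat for <name> (<timestamp>)". A minimal two-thread sketch of that pattern, using a queue; the real mechanism inside ceilometer may differ.

# Sketch: pollster workers stamp heartbeats, a status thread publishes them.
import queue
import threading
from datetime import datetime, timezone

heartbeats: "queue.Queue[tuple[str, datetime]]" = queue.Queue()


def heartbeat(pollster_name: str) -> None:
    # worker thread (e.g. "14" in the log) records a successful run
    heartbeats.put((pollster_name, datetime.now(timezone.utc)))


def _update_status(stop: threading.Event) -> None:
    # status thread (e.g. "12" in the log) drains and reports timestamps
    while not stop.is_set() or not heartbeats.empty():
        try:
            name, ts = heartbeats.get(timeout=0.1)
        except queue.Empty:
            continue
        print(f"Updated heartbeat for {name} ({ts.isoformat()})")


stop = threading.Event()
t = threading.Thread(target=_update_status, args=(stop,))
t.start()
heartbeat("disk.ephemeral.size")
heartbeat("network.incoming.packets")
stop.set()
t.join()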
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.872 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.876 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.877 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.877 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.877 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.877 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.877 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.877 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.878 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.878 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.878 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.878 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.878 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.878 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.878 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.878 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.879 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T23:50:36.877700) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.879 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.879 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.879 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.879 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.879 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.880 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.880 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T23:50:36.878584) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.880 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T23:50:36.880045) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.910 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/cpu volume: 171180000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.941 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/cpu volume: 332990000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.941 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
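[annotation] The cpu meter is cumulative guest CPU time in nanoseconds (ceilometer's documented unit for cpu is ns, sourced from libvirt's cpuTime), so the two volumes above correspond to roughly 171 s and 333 s of CPU time. A quick arithmetic check:

# Sanity-check the two cpu samples logged above (cumulative ns).
NS_PER_S = 1_000_000_000
for uuid, ns in [("b7d5e999-38ca-46e8-b572-cc9fad0fc2cc", 171_180_000_000),
                 ("0449208f-d12b-40cb-aa71-6f67f687cb6f", 332_990_000_000)]:
    print(f"{uuid}: {ns / NS_PER_S:.2f} s of CPU time")
# -> 171.18 s and 332.99 s respectively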
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.941 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.942 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.942 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.942 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.942 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.942 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.942 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.943 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.943 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.943 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.943 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.943 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.943 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.943 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T23:50:36.942394) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.944 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T23:50:36.943514) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.960 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.961 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.977 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.977 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.977 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
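[annotation] disk.device.capacity emits one sample per attached block device, which is why each instance produces two volumes above: 1073741824 B is exactly the 1 GiB root disk of the m1.nano flavor, and 509952 B is a small second device (consistent with a config-drive image, though that identification is an assumption). Worked check:

GiB = 1024 ** 3
assert 1073741824 == 1 * GiB        # matches m1.nano's "disk: 1" (GiB)
print(509952 / 1024, "KiB")         # -> 498.0 KiB secondary device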
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.977 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.978 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.978 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.978 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.978 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.978 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.979 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.979 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.979 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.979 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.979 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.979 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.979 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.980 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
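[annotation] The .delta meters report the change in a cumulative counter since the previous cycle: instance 0449208f's network.outgoing.bytes rose from 1620 to 2250, giving the delta of 630 logged above, while b7d5e999's counter stayed at 1620, giving 0. A minimal sketch of that computation (the cache layout is an assumption, not ceilometer's actual state store):

# Sketch of a delta meter: remember the previous cumulative counter
# per resource and emit the difference.
_previous: dict[str, int] = {}


def delta_sample(resource_id: str, cumulative: int) -> int:
    prev = _previous.get(resource_id, cumulative)  # first cycle -> delta 0
    _previous[resource_id] = cumulative
    return cumulative - prev


# First cycle seeds the cache, second cycle yields the delta:
delta_sample("0449208f-d12b-40cb-aa71-6f67f687cb6f", 1620)
print(delta_sample("0449208f-d12b-40cb-aa71-6f67f687cb6f", 2250))  # -> 630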
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.980 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.980 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.980 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.980 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.980 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.980 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/memory.usage volume: 43.5 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.980 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/memory.usage volume: 42.5859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.981 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
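[annotation] memory.usage is reported in megabytes (the meter's documented unit is MB; the fractional values come from libvirt's KiB counters divided by 1024). Against the 128 MB m1.nano flavor from the discovery records above, the two instances sit at roughly a third of their RAM:

flavor_ram_mb = 128
for uuid, used in [("b7d5e999-38ca-46e8-b572-cc9fad0fc2cc", 43.5),
                   ("0449208f-d12b-40cb-aa71-6f67f687cb6f", 42.5859375)]:
    print(f"{uuid}: {used} MB = {used / flavor_ram_mb:.1%} of flavor RAM")
# -> 34.0% and 33.3% respectively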
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.981 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.981 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.981 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.981 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.981 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.982 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.982 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.982 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
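[annotation] The "Skip pollster ..." line above shows the short-circuit: discovery runs first, and when it yields no resources for this cycle the manager never polls at all (no heartbeat, no samples). A sketch of that control flow with hypothetical names:

# Sketch: only poll when this cycle's discovery returned resources.
def run_pollster(name, discover, poll):
    resources = discover()
    if not resources:
        print(f"Skip pollster {name}, no new resources found this cycle")
        return []
    print(f"Polling pollster {name} in the context of pollsters")
    return [poll(r) for r in resources]


run_pollster("network.outgoing.bytes.rate", lambda: [], lambda r: None)
run_pollster("network.outgoing.packets",
             lambda: ["b7d5e999-38ca-46e8-b572-cc9fad0fc2cc",
                      "0449208f-d12b-40cb-aa71-6f67f687cb6f"],
             lambda r: f"{r}: sample")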
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.982 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.983 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.983 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.983 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.983 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.983 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.983 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T23:50:36.978272) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.984 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.984 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.984 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.984 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.984 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.984 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.984 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T23:50:36.979583) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.985 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T23:50:36.980676) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.985 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T23:50:36.981656) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.985 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T23:50:36.983308) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:50:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:36.985 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T23:50:36.984553) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.023 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.bytes volume: 30137344 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.023 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.071 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.bytes volume: 30812672 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.071 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.072 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.072 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.072 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.072 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.072 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.072 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.072 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.073 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.073 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.073 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.073 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.074 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.074 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.074 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.074 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T23:50:37.072627) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.074 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.074 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.074 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.075 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.075 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.075 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.latency volume: 739295997 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.075 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.latency volume: 89632121 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.075 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.latency volume: 968376186 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.075 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.latency volume: 67351116 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.076 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.076 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.076 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.076 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.076 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.076 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.076 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.077 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.077 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.077 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.077 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.077 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.078 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.078 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.078 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.078 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.078 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.078 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.078 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T23:50:37.074068) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.079 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
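[editor's note] The "Checking if we need coordination" / "current hashrings are the following [None]" pair appears in every cycle: when a polling source requires coordination, agents split resources across a hashring so each resource is polled by exactly one agent; with no hashring, this agent polls everything it discovered. A toy consistent-hash stand-in (not tooz's real hashring API) to illustrate the membership decision:

    # Toy partition check; tooz's real hashring is more involved.
    import hashlib

    def owns_resource(agent_id, ring_members, resource_id):
        if not ring_members:  # hashring is None -> no coordination, poll it all
            return True
        digest = int(hashlib.md5(resource_id.encode()).hexdigest(), 16)
        return sorted(ring_members)[digest % len(ring_members)] == agent_id

    # With no hashring this agent owns every local instance:
    print(owns_resource("agent-0", [], "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc"))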
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.079 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.079 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.079 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.079 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.079 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.requests volume: 1090 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.080 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.079 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T23:50:37.075187) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.080 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.requests volume: 1112 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.080 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.080 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T23:50:37.076649) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.080 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
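[editor's note] Two heartbeat streams interleave above: worker thread 14 logs "Pollster heartbeat update: <meter>" as each pollster runs, and thread 12 later logs "Updated heartbeat for <meter> (<timestamp>)" when it reads the recorded times back. A hypothetical sketch of that producer/reader bookkeeping (not ceilometer's actual manager.py):

    import threading
    from datetime import datetime, timezone

    _heartbeats = {}
    _hb_lock = threading.Lock()

    def heartbeat(meter):
        # worker side: "Pollster heartbeat update: <meter>"
        with _hb_lock:
            _heartbeats[meter] = datetime.now(timezone.utc)

    def update_status():
        # status side: "Updated heartbeat for <meter> (<timestamp>)"
        with _hb_lock:
            for meter, ts in _heartbeats.items():
                print(f"Updated heartbeat for {meter} ({ts.isoformat()})")

    heartbeat("disk.device.read.requests")
    update_status()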
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.080 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.081 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.081 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.081 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.081 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.bytes volume: 72884224 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.081 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.081 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.bytes volume: 73048064 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.082 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.080 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T23:50:37.078158) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.082 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
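[editor's note] The per-device byte and request counters come from libvirt's block statistics for each running domain. A minimal sketch using libvirt-python's blockStats(), which returns the (rd_req, rd_bytes, wr_req, wr_bytes, errs) tuple; it assumes a reachable local libvirtd, and the device names vda/vdb are assumptions for illustration:

    import libvirt  # requires libvirt-python and a reachable libvirtd

    conn = libvirt.openReadOnly("qemu:///system")
    for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
        for dev in ("vda", "vdb"):  # device names assumed for illustration
            try:
                rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
            except libvirt.libvirtError:
                continue  # device not attached to this domain
            print(f"{dom.UUIDString()}/{dev} "
                  f"write.bytes={wr_bytes} write.requests={wr_req}")
    conn.close()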
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.082 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.082 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.082 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.082 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.082 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.083 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.083 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T23:50:37.079767) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.083 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
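[editor's note] The power.state volume of 1 for both instances corresponds to libvirt's VIR_DOMAIN_RUNNING. A short sketch of reading that value with libvirt-python (same libvirtd assumption as the earlier sketch):

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
        state, _reason = dom.state()
        # libvirt.VIR_DOMAIN_RUNNING == 1, matching "power.state volume: 1"
        print(f"{dom.UUIDString()}/power.state volume: {state}")
    conn.close()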
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.083 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.083 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.083 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.083 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.084 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.latency volume: 2705910374 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.084 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.084 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.latency volume: 3905775346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.084 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T23:50:37.081288) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.084 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.084 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T23:50:37.082892) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.085 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
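[editor's note] The multi-billion write.latency volumes (e.g. 2705910374) are cumulative nanoseconds spent in writes since boot, not per-operation latencies. libvirt exposes them via blockStatsFlags(), whose result dict includes 'wr_total_times' in ns; a sketch under the same assumptions as the earlier libvirt example:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
        stats = dom.blockStatsFlags("vda")  # device name assumed
        # 'wr_total_times' is cumulative time spent on writes, in nanoseconds
        print(f"{dom.UUIDString()}/disk.device.write.latency "
              f"volume: {stats['wr_total_times']}")
    conn.close()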
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.085 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.085 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.085 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.085 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.085 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.085 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.085 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T23:50:37.083935) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.086 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
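[editor's note] *.delta meters report the difference between this cycle's cumulative counter and the previous cycle's, which is why the idle instance shows 0 while the other shows 630. A hypothetical cache illustrating the subtraction (not ceilometer's actual sample pipeline):

    _previous = {}

    def delta(resource_id, meter, current):
        key = (resource_id, meter)
        prev = _previous.get(key, current)  # first cycle reports 0
        _previous[key] = current
        return current - prev

    rid = "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc"
    delta(rid, "network.incoming.bytes", 1000)         # prime the cache
    print(delta(rid, "network.incoming.bytes", 1630))  # -> 630, as in the log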
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.086 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.086 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.086 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.086 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.086 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.requests volume: 316 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.086 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.087 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.requests volume: 316 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.087 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T23:50:37.085606) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.087 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.087 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T23:50:37.086637) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.088 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.088 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.088 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.088 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:50:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:50:37.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
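[editor's note] The burst of "Finished processing pollster [...]" lines marks every meter in this polling task completing its fan-out; note the preceding "Skip pollster ..., no new resources found this cycle" line, which is the short-circuit for a pollster whose discovery returned nothing new. A minimal thread-pool sketch of the fan-out pattern; ceilometer actually uses cooperative threading in manager.py, so this is illustrative only:

    from concurrent.futures import ThreadPoolExecutor

    def process(meter):
        # ... discover, poll, publish for one meter ...
        return meter

    meters = ["disk.ephemeral.size", "network.incoming.packets", "cpu",
              "disk.device.write.requests", "network.incoming.bytes.rate"]
    with ThreadPoolExecutor(max_workers=4) as pool:
        for done in pool.map(process, meters):
            print(f"Finished processing pollster [{done}].")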
Nov 26 23:50:37 compute-0 podman[255622]: 2025-11-26 23:50:37.850563525 +0000 UTC m=+0.138356954 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
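[editor's note] The podman[...] container health_status events here and below are emitted each time podman runs the healthcheck command configured under 'healthcheck' in config_data. To read the current status out-of-band, something like the following works (assumes the podman CLI is installed; on older podman releases the template field is .State.Healthcheck.Status rather than .State.Health.Status):

    import subprocess

    def health_status(container):
        # "podman inspect" exposes the last healthcheck result
        out = subprocess.run(
            ["podman", "inspect", "--format",
             "{{.State.Health.Status}}", container],
            capture_output=True, text=True, check=True)
        return out.stdout.strip()

    print(health_status("multipathd"))  # e.g. "healthy", as in the event above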
Nov 26 23:50:40 compute-0 nova_compute[189387]: 2025-11-26 23:50:40.902 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:50:41 compute-0 podman[255657]: 2025-11-26 23:50:41.818732776 +0000 UTC m=+0.104186322 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 23:50:41 compute-0 nova_compute[189387]: 2025-11-26 23:50:41.827 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:50:45 compute-0 nova_compute[189387]: 2025-11-26 23:50:45.902 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:50:46 compute-0 nova_compute[189387]: 2025-11-26 23:50:46.832 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:50:48 compute-0 podman[255679]: 2025-11-26 23:50:48.781446008 +0000 UTC m=+0.080554571 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 26 23:50:49 compute-0 nova_compute[189387]: 2025-11-26 23:50:49.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:50:49 compute-0 nova_compute[189387]: 2025-11-26 23:50:49.124 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:50:50 compute-0 nova_compute[189387]: 2025-11-26 23:50:50.436 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:50:50 compute-0 nova_compute[189387]: 2025-11-26 23:50:50.437 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:50:50 compute-0 nova_compute[189387]: 2025-11-26 23:50:50.437 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 23:50:50 compute-0 nova_compute[189387]: 2025-11-26 23:50:50.904 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:50:51 compute-0 nova_compute[189387]: 2025-11-26 23:50:51.611 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Updating instance_info_cache with network_info: [{"id": "538c994f-bee1-4965-9065-a8ef17e40bea", "address": "fa:16:3e:47:75:6d", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap538c994f-be", "ovs_interfaceid": "538c994f-bee1-4965-9065-a8ef17e40bea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 23:50:51 compute-0 nova_compute[189387]: 2025-11-26 23:50:51.628 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 23:50:51 compute-0 nova_compute[189387]: 2025-11-26 23:50:51.629 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
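[editor's note] The Acquiring/Acquired/Releasing triple around the cache refresh is oslo.concurrency's named-lock pattern: the lock name embeds the instance UUID, so concurrent refreshes of the same instance serialize while refreshes of different instances proceed in parallel. A minimal sketch using the real lockutils API (the refresh body itself is elided):

    from oslo_concurrency import lockutils

    instance_uuid = "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc"

    # Emits the same Acquiring/Acquired/Releasing DEBUG lines seen above
    # when oslo.concurrency debug logging is enabled.
    with lockutils.lock(f"refresh_cache-{instance_uuid}"):
        pass  # refresh the instance's network info cache while serialized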
Nov 26 23:50:51 compute-0 nova_compute[189387]: 2025-11-26 23:50:51.631 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:50:51 compute-0 nova_compute[189387]: 2025-11-26 23:50:51.631 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 23:50:51 compute-0 nova_compute[189387]: 2025-11-26 23:50:51.836 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:50:52 compute-0 nova_compute[189387]: 2025-11-26 23:50:52.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:50:52 compute-0 nova_compute[189387]: 2025-11-26 23:50:52.153 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:50:52 compute-0 nova_compute[189387]: 2025-11-26 23:50:52.153 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:50:52 compute-0 nova_compute[189387]: 2025-11-26 23:50:52.154 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:50:52 compute-0 nova_compute[189387]: 2025-11-26 23:50:52.155 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:50:52 compute-0 nova_compute[189387]: 2025-11-26 23:50:52.266 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:50:52 compute-0 nova_compute[189387]: 2025-11-26 23:50:52.365 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:50:52 compute-0 nova_compute[189387]: 2025-11-26 23:50:52.366 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:50:52 compute-0 nova_compute[189387]: 2025-11-26 23:50:52.467 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:50:52 compute-0 nova_compute[189387]: 2025-11-26 23:50:52.476 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:50:52 compute-0 nova_compute[189387]: 2025-11-26 23:50:52.555 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:50:52 compute-0 nova_compute[189387]: 2025-11-26 23:50:52.558 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:50:52 compute-0 nova_compute[189387]: 2025-11-26 23:50:52.660 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
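[editor's note] Each qemu-img info call above is wrapped by oslo.concurrency's prlimit helper, which caps the child's address space at 1 GiB and CPU time at 30 s so a malformed image cannot wedge the compute agent; --force-share lets it read a disk QEMU already has open. A sketch reproducing the same invocation with the real processutils API (disk path taken from the log):

    from oslo_concurrency import processutils

    disk = "/var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk"
    # prlimit= makes oslo re-exec the command under
    # "python -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 --",
    # exactly the wrapper visible in the log lines above.
    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", disk, "--force-share", "--output=json",
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30))
    print(out)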
Nov 26 23:50:53 compute-0 nova_compute[189387]: 2025-11-26 23:50:53.117 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:50:53 compute-0 nova_compute[189387]: 2025-11-26 23:50:53.118 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4888MB free_disk=72.24824523925781GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 23:50:53 compute-0 nova_compute[189387]: 2025-11-26 23:50:53.118 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:50:53 compute-0 nova_compute[189387]: 2025-11-26 23:50:53.119 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:50:53 compute-0 nova_compute[189387]: 2025-11-26 23:50:53.253 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 0449208f-d12b-40cb-aa71-6f67f687cb6f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:50:53 compute-0 nova_compute[189387]: 2025-11-26 23:50:53.254 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance b7d5e999-38ca-46e8-b572-cc9fad0fc2cc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:50:53 compute-0 nova_compute[189387]: 2025-11-26 23:50:53.254 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:50:53 compute-0 nova_compute[189387]: 2025-11-26 23:50:53.255 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 23:50:53 compute-0 nova_compute[189387]: 2025-11-26 23:50:53.339 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:50:53 compute-0 nova_compute[189387]: 2025-11-26 23:50:53.361 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 23:50:53 compute-0 nova_compute[189387]: 2025-11-26 23:50:53.364 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:50:53 compute-0 nova_compute[189387]: 2025-11-26 23:50:53.365 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
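[editor's note] "Inventory has not changed" at provider_tree.py:180 and report.py:940 means the freshly computed inventory compared equal to the cached copy, so no placement REST call is made. A trivial sketch of that compare-before-update guard (set_inventory here is hypothetical, not nova's method; keys abridged from the log):

    cached = {"VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
              "MEMORY_MB": {"total": 7680, "reserved": 512,
                            "allocation_ratio": 1.0}}

    def set_inventory(provider, new):
        # hypothetical guard: only talk to placement when something changed
        if new == cached:
            print(f"Inventory has not changed for provider {provider}")
            return
        # ... PUT /resource_providers/{provider}/inventories ...

    set_inventory("de65df0c-bd6c-4ecc-b0a9-30ae4314ce78", dict(cached))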
Nov 26 23:50:54 compute-0 nova_compute[189387]: 2025-11-26 23:50:54.366 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:50:55 compute-0 nova_compute[189387]: 2025-11-26 23:50:55.906 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:50:56 compute-0 nova_compute[189387]: 2025-11-26 23:50:56.120 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:50:56 compute-0 nova_compute[189387]: 2025-11-26 23:50:56.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:50:56 compute-0 nova_compute[189387]: 2025-11-26 23:50:56.840 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:50:56 compute-0 podman[255716]: 2025-11-26 23:50:56.848822338 +0000 UTC m=+0.106068892 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi)
Nov 26 23:50:56 compute-0 podman[255712]: 2025-11-26 23:50:56.854456689 +0000 UTC m=+0.134640365 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, architecture=x86_64, managed_by=edpm_ansible, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., release-0.7.12=, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, release=1214.1726694543, container_name=kepler, vendor=Red Hat, Inc.)
Nov 26 23:50:56 compute-0 podman[255714]: 2025-11-26 23:50:56.867436855 +0000 UTC m=+0.124431482 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 23:50:56 compute-0 podman[255721]: 2025-11-26 23:50:56.870149528 +0000 UTC m=+0.132603611 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, version=9.6, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=minimal rhel9)
Nov 26 23:50:56 compute-0 podman[255713]: 2025-11-26 23:50:56.881607274 +0000 UTC m=+0.154695601 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 23:50:56 compute-0 podman[255715]: 2025-11-26 23:50:56.88334494 +0000 UTC m=+0.132179720 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 26 23:50:57 compute-0 nova_compute[189387]: 2025-11-26 23:50:57.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:50:58 compute-0 nova_compute[189387]: 2025-11-26 23:50:58.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:50:59 compute-0 nova_compute[189387]: 2025-11-26 23:50:59.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:50:59 compute-0 podman[203621]: time="2025-11-26T23:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:50:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:50:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4816 "" "Go-http-client/1.1"
Nov 26 23:51:00 compute-0 nova_compute[189387]: 2025-11-26 23:51:00.908 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:51:01 compute-0 openstack_network_exporter[205787]: ERROR   23:51:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:51:01 compute-0 openstack_network_exporter[205787]: ERROR   23:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:51:01 compute-0 openstack_network_exporter[205787]: ERROR   23:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:51:01 compute-0 openstack_network_exporter[205787]: ERROR   23:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:51:01 compute-0 openstack_network_exporter[205787]: ERROR   23:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:51:01 compute-0 nova_compute[189387]: 2025-11-26 23:51:01.843 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:51:05 compute-0 nova_compute[189387]: 2025-11-26 23:51:05.911 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:51:06 compute-0 nova_compute[189387]: 2025-11-26 23:51:06.847 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:51:08 compute-0 podman[255833]: 2025-11-26 23:51:08.787368192 +0000 UTC m=+0.078993911 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 26 23:51:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:51:09.661 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:51:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:51:09.661 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:51:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:51:09.662 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:51:10 compute-0 nova_compute[189387]: 2025-11-26 23:51:10.918 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:51:11 compute-0 nova_compute[189387]: 2025-11-26 23:51:11.851 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:51:12 compute-0 podman[255853]: 2025-11-26 23:51:12.834309365 +0000 UTC m=+0.115801553 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:51:15 compute-0 nova_compute[189387]: 2025-11-26 23:51:15.920 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:51:16 compute-0 nova_compute[189387]: 2025-11-26 23:51:16.856 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:51:19 compute-0 podman[255878]: 2025-11-26 23:51:19.840800255 +0000 UTC m=+0.127751441 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 26 23:51:20 compute-0 nova_compute[189387]: 2025-11-26 23:51:20.928 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:51:21 compute-0 nova_compute[189387]: 2025-11-26 23:51:21.862 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:51:25 compute-0 nova_compute[189387]: 2025-11-26 23:51:25.928 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:51:26 compute-0 nova_compute[189387]: 2025-11-26 23:51:26.868 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:51:27 compute-0 podman[255918]: 2025-11-26 23:51:27.859461925 +0000 UTC m=+0.102665541 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125)
Nov 26 23:51:27 compute-0 podman[255900]: 2025-11-26 23:51:27.859924488 +0000 UTC m=+0.127029062 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 23:51:27 compute-0 podman[255920]: 2025-11-26 23:51:27.863397451 +0000 UTC m=+0.093144157 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, container_name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41)
Nov 26 23:51:27 compute-0 podman[255898]: 2025-11-26 23:51:27.881380371 +0000 UTC m=+0.157894616 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2024-09-18T21:23:30, release-0.7.12=, io.openshift.expose-services=, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, version=9.4, config_id=edpm, container_name=kepler, vendor=Red Hat, Inc., io.openshift.tags=base rhel9)
Nov 26 23:51:27 compute-0 podman[255906]: 2025-11-26 23:51:27.881607507 +0000 UTC m=+0.129401845 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 26 23:51:27 compute-0 podman[255899]: 2025-11-26 23:51:27.899340721 +0000 UTC m=+0.163817705 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_id=ovn_controller, container_name=ovn_controller)
Nov 26 23:51:29 compute-0 podman[203621]: time="2025-11-26T23:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:51:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:51:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4815 "" "Go-http-client/1.1"
Nov 26 23:51:30 compute-0 nova_compute[189387]: 2025-11-26 23:51:30.932 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:51:31 compute-0 openstack_network_exporter[205787]: ERROR   23:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:51:31 compute-0 openstack_network_exporter[205787]: ERROR   23:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:51:31 compute-0 openstack_network_exporter[205787]: ERROR   23:51:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:51:31 compute-0 openstack_network_exporter[205787]: ERROR   23:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:51:31 compute-0 openstack_network_exporter[205787]: ERROR   23:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:51:31 compute-0 nova_compute[189387]: 2025-11-26 23:51:31.872 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:51:35 compute-0 nova_compute[189387]: 2025-11-26 23:51:35.932 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:51:36 compute-0 nova_compute[189387]: 2025-11-26 23:51:36.876 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:51:39 compute-0 podman[256019]: 2025-11-26 23:51:39.828956085 +0000 UTC m=+0.118087143 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 26 23:51:40 compute-0 nova_compute[189387]: 2025-11-26 23:51:40.935 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:51:41 compute-0 nova_compute[189387]: 2025-11-26 23:51:41.881 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:51:43 compute-0 podman[256037]: 2025-11-26 23:51:43.858593207 +0000 UTC m=+0.142070383 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:51:45 compute-0 nova_compute[189387]: 2025-11-26 23:51:45.937 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:51:46 compute-0 nova_compute[189387]: 2025-11-26 23:51:46.885 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:51:49 compute-0 nova_compute[189387]: 2025-11-26 23:51:49.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:51:49 compute-0 nova_compute[189387]: 2025-11-26 23:51:49.127 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 23:51:50 compute-0 nova_compute[189387]: 2025-11-26 23:51:50.128 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:51:50 compute-0 nova_compute[189387]: 2025-11-26 23:51:50.130 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:51:50 compute-0 nova_compute[189387]: 2025-11-26 23:51:50.131 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 23:51:50 compute-0 nova_compute[189387]: 2025-11-26 23:51:50.461 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-0449208f-d12b-40cb-aa71-6f67f687cb6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:51:50 compute-0 nova_compute[189387]: 2025-11-26 23:51:50.462 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-0449208f-d12b-40cb-aa71-6f67f687cb6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:51:50 compute-0 nova_compute[189387]: 2025-11-26 23:51:50.462 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 23:51:50 compute-0 nova_compute[189387]: 2025-11-26 23:51:50.462 189391 DEBUG nova.objects.instance [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0449208f-d12b-40cb-aa71-6f67f687cb6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 23:51:50 compute-0 podman[256062]: 2025-11-26 23:51:50.842900995 +0000 UTC m=+0.122606034 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251125)
Nov 26 23:51:50 compute-0 nova_compute[189387]: 2025-11-26 23:51:50.939 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:51:51 compute-0 nova_compute[189387]: 2025-11-26 23:51:51.890 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:51:51 compute-0 nova_compute[189387]: 2025-11-26 23:51:51.948 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Updating instance_info_cache with network_info: [{"id": "a6675240-60ea-47db-9ef6-66080adb5743", "address": "fa:16:3e:d6:2e:64", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6675240-60", "ovs_interfaceid": "a6675240-60ea-47db-9ef6-66080adb5743", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 23:51:51 compute-0 nova_compute[189387]: 2025-11-26 23:51:51.969 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-0449208f-d12b-40cb-aa71-6f67f687cb6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 23:51:51 compute-0 nova_compute[189387]: 2025-11-26 23:51:51.971 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 26 23:51:52 compute-0 nova_compute[189387]: 2025-11-26 23:51:52.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:51:52 compute-0 nova_compute[189387]: 2025-11-26 23:51:52.156 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:51:52 compute-0 nova_compute[189387]: 2025-11-26 23:51:52.158 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:51:52 compute-0 nova_compute[189387]: 2025-11-26 23:51:52.159 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:51:52 compute-0 nova_compute[189387]: 2025-11-26 23:51:52.161 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:51:52 compute-0 nova_compute[189387]: 2025-11-26 23:51:52.286 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:51:52 compute-0 nova_compute[189387]: 2025-11-26 23:51:52.389 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:51:52 compute-0 nova_compute[189387]: 2025-11-26 23:51:52.392 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:51:52 compute-0 nova_compute[189387]: 2025-11-26 23:51:52.477 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:51:52 compute-0 nova_compute[189387]: 2025-11-26 23:51:52.487 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:51:52 compute-0 nova_compute[189387]: 2025-11-26 23:51:52.554 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:51:52 compute-0 nova_compute[189387]: 2025-11-26 23:51:52.556 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:51:52 compute-0 nova_compute[189387]: 2025-11-26 23:51:52.623 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:51:53 compute-0 nova_compute[189387]: 2025-11-26 23:51:53.029 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:51:53 compute-0 nova_compute[189387]: 2025-11-26 23:51:53.032 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4883MB free_disk=72.24820327758789GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 23:51:53 compute-0 nova_compute[189387]: 2025-11-26 23:51:53.033 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:51:53 compute-0 nova_compute[189387]: 2025-11-26 23:51:53.034 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:51:53 compute-0 nova_compute[189387]: 2025-11-26 23:51:53.133 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 0449208f-d12b-40cb-aa71-6f67f687cb6f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:51:53 compute-0 nova_compute[189387]: 2025-11-26 23:51:53.135 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance b7d5e999-38ca-46e8-b572-cc9fad0fc2cc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:51:53 compute-0 nova_compute[189387]: 2025-11-26 23:51:53.135 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:51:53 compute-0 nova_compute[189387]: 2025-11-26 23:51:53.136 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 23:51:53 compute-0 nova_compute[189387]: 2025-11-26 23:51:53.215 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:51:53 compute-0 nova_compute[189387]: 2025-11-26 23:51:53.233 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 23:51:53 compute-0 nova_compute[189387]: 2025-11-26 23:51:53.235 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:51:53 compute-0 nova_compute[189387]: 2025-11-26 23:51:53.236 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:51:55 compute-0 nova_compute[189387]: 2025-11-26 23:51:55.944 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:51:56 compute-0 nova_compute[189387]: 2025-11-26 23:51:56.238 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:51:56 compute-0 nova_compute[189387]: 2025-11-26 23:51:56.894 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:51:57 compute-0 nova_compute[189387]: 2025-11-26 23:51:57.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:51:57 compute-0 nova_compute[189387]: 2025-11-26 23:51:57.129 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:51:58 compute-0 nova_compute[189387]: 2025-11-26 23:51:58.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:51:58 compute-0 podman[256099]: 2025-11-26 23:51:58.848869666 +0000 UTC m=+0.118676048 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 23:51:58 compute-0 podman[256106]: 2025-11-26 23:51:58.851774144 +0000 UTC m=+0.111162218 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 26 23:51:58 compute-0 podman[256092]: 2025-11-26 23:51:58.856456809 +0000 UTC m=+0.141742184 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, managed_by=edpm_ansible, release-0.7.12=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, name=ubi9, build-date=2024-09-18T21:23:30, config_id=edpm, io.openshift.expose-services=, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, release=1214.1726694543)
Nov 26 23:51:58 compute-0 podman[256093]: 2025-11-26 23:51:58.872264951 +0000 UTC m=+0.148863515 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Nov 26 23:51:58 compute-0 podman[256115]: 2025-11-26 23:51:58.880917852 +0000 UTC m=+0.133295819 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 26 23:51:58 compute-0 podman[256124]: 2025-11-26 23:51:58.890418046 +0000 UTC m=+0.128759999 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.expose-services=, version=9.6, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7)
Nov 26 23:51:59 compute-0 nova_compute[189387]: 2025-11-26 23:51:59.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:51:59 compute-0 podman[203621]: time="2025-11-26T23:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:51:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:51:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4817 "" "Go-http-client/1.1"
Nov 26 23:52:00 compute-0 nova_compute[189387]: 2025-11-26 23:52:00.945 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:52:01 compute-0 nova_compute[189387]: 2025-11-26 23:52:01.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:52:01 compute-0 openstack_network_exporter[205787]: ERROR   23:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:52:01 compute-0 openstack_network_exporter[205787]: ERROR   23:52:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:52:01 compute-0 openstack_network_exporter[205787]: ERROR   23:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:52:01 compute-0 openstack_network_exporter[205787]: ERROR   23:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:52:01 compute-0 openstack_network_exporter[205787]: ERROR   23:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:52:01 compute-0 nova_compute[189387]: 2025-11-26 23:52:01.898 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:52:05 compute-0 nova_compute[189387]: 2025-11-26 23:52:05.950 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:52:06 compute-0 nova_compute[189387]: 2025-11-26 23:52:06.902 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:52:09 compute-0 nova_compute[189387]: 2025-11-26 23:52:09.120 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:52:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:52:09.662 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:52:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:52:09.663 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:52:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:52:09.664 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:52:10 compute-0 podman[256210]: 2025-11-26 23:52:10.803987488 +0000 UTC m=+0.100015546 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, tcib_managed=true, managed_by=edpm_ansible)
Nov 26 23:52:10 compute-0 nova_compute[189387]: 2025-11-26 23:52:10.951 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:52:11 compute-0 nova_compute[189387]: 2025-11-26 23:52:11.906 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:52:14 compute-0 podman[256230]: 2025-11-26 23:52:14.770251998 +0000 UTC m=+0.084525134 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 23:52:15 compute-0 nova_compute[189387]: 2025-11-26 23:52:15.954 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:52:16 compute-0 nova_compute[189387]: 2025-11-26 23:52:16.909 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:52:20 compute-0 nova_compute[189387]: 2025-11-26 23:52:20.957 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:52:21 compute-0 podman[256254]: 2025-11-26 23:52:21.805735516 +0000 UTC m=+0.099272885 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 26 23:52:21 compute-0 nova_compute[189387]: 2025-11-26 23:52:21.913 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:52:25 compute-0 nova_compute[189387]: 2025-11-26 23:52:25.959 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:52:26 compute-0 nova_compute[189387]: 2025-11-26 23:52:26.917 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:52:29 compute-0 podman[203621]: time="2025-11-26T23:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:52:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:52:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4813 "" "Go-http-client/1.1"
Nov 26 23:52:29 compute-0 podman[256281]: 2025-11-26 23:52:29.826797362 +0000 UTC m=+0.089568707 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:52:29 compute-0 podman[256276]: 2025-11-26 23:52:29.83797873 +0000 UTC m=+0.122114894 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 23:52:29 compute-0 podman[256291]: 2025-11-26 23:52:29.83838022 +0000 UTC m=+0.098840673 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.expose-services=, managed_by=edpm_ansible, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, architecture=x86_64, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container)
Nov 26 23:52:29 compute-0 podman[256277]: 2025-11-26 23:52:29.847217297 +0000 UTC m=+0.120007199 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:52:29 compute-0 podman[256274]: 2025-11-26 23:52:29.851827029 +0000 UTC m=+0.133359274 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.4, com.redhat.component=ubi9-container, container_name=kepler, managed_by=edpm_ansible, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, name=ubi9, vcs-type=git, config_id=edpm, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 26 23:52:29 compute-0 podman[256275]: 2025-11-26 23:52:29.887403256 +0000 UTC m=+0.174112879 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 26 23:52:30 compute-0 nova_compute[189387]: 2025-11-26 23:52:30.961 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:52:31 compute-0 openstack_network_exporter[205787]: ERROR   23:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:52:31 compute-0 openstack_network_exporter[205787]: ERROR   23:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:52:31 compute-0 openstack_network_exporter[205787]: ERROR   23:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:52:31 compute-0 openstack_network_exporter[205787]: ERROR   23:52:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:52:31 compute-0 openstack_network_exporter[205787]: ERROR   23:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:52:31 compute-0 nova_compute[189387]: 2025-11-26 23:52:31.921 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:52:35 compute-0 nova_compute[189387]: 2025-11-26 23:52:35.966 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.852 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.852 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.861 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.861 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.861 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.861 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.862 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
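The burst of "Registering pollster" lines above shows the agent binding every extension loaded from the "pollsters" source to one shared ThreadPoolExecutor, with the cache, pollster history, and discovery cache all starting out as empty dicts. A minimal sketch of that registration shape, using plain strings where the real agent passes stevedore.extension.Extension objects; this is a toy, not ceilometer's AgentManager:

from concurrent.futures import ThreadPoolExecutor

class MiniPollingAgent:
    def __init__(self, max_workers=4):
        # One executor shared by every registration, as in the log.
        self.executor = ThreadPoolExecutor(max_workers=max_workers)
        self.registrations = []

    def register_pollster_execution(self, extension, source="pollsters"):
        # cache, history, and discovery cache all begin empty ({}),
        # exactly as each DEBUG line reports.
        self.registrations.append({
            "extension": extension,
            "source": source,
            "cache": {},
            "history": {},
            "discovery_cache": {},
        })

agent = MiniPollingAgent()
for name in ("disk.ephemeral.size", "network.incoming.packets", "cpu"):
    agent.register_pollster_execution(name)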
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.864 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b7d5e999-38ca-46e8-b572-cc9fad0fc2cc', 'name': 'te-7486994-asg-gqdvh3lloqbk-w3pew7r5aglv-t7fkcg4jtkgf', 'flavor': {'id': 'a4234b2d-ed51-4e17-ad57-a8fb6154451b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'aa1a3d84-3b07-42eb-bb8c-755851616ed6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '717a3950b66241768222cb5d4ba3291e', 'user_id': '5715267a6ec9422aa9b3ef4a2956aa77', 'hostId': '27d3802b1abe41bf2d1abd490eb0aa08acfb598924ded34a7e1a15fc', 'status': 'active', 'metadata': {'metering.server_group': '92e43243-aca7-437e-ae08-bcb42a48e489'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.868 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '0449208f-d12b-40cb-aa71-6f67f687cb6f', 'name': 'te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di', 'flavor': {'id': 'a4234b2d-ed51-4e17-ad57-a8fb6154451b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'aa1a3d84-3b07-42eb-bb8c-755851616ed6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '717a3950b66241768222cb5d4ba3291e', 'user_id': '5715267a6ec9422aa9b3ef4a2956aa77', 'hostId': '27d3802b1abe41bf2d1abd490eb0aa08acfb598924ded34a7e1a15fc', 'status': 'active', 'metadata': {'metering.server_group': '92e43243-aca7-437e-ae08-bcb42a48e489'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
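Each "instance data" dump is the libvirt discovery payload for one running guest; the fields it carries are what the pollsters below attach to their samples. A hedged illustration of reading that payload (the dict keys are copied verbatim from the dumps above, the helper itself is hypothetical):

def summarize_instance(inst):
    # Field names taken directly from the logged discovery dict.
    flavor = inst["flavor"]
    return (f"{inst['id']} ({inst['OS-EXT-SRV-ATTR:instance_name']}) on "
            f"{inst['OS-EXT-SRV-ATTR:host']}: {flavor['vcpus']} vCPU, "
            f"{flavor['ram']} MB RAM, state={inst['OS-EXT-STS:vm_state']}")

print(summarize_instance({
    "id": "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc",
    "OS-EXT-SRV-ATTR:instance_name": "instance-0000000f",
    "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com",
    "OS-EXT-STS:vm_state": "running",
    "flavor": {"vcpus": 1, "ram": 128},
}))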
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.869 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.869 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.869 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.870 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.871 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
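Before each run the manager checks whether the pollster's source demands coordination. Here the coordination group name is None, so no hashring applies and this agent polls everything locally. When a group is configured, a hashring decides which agent owns each resource; a stand-in sketch of that ownership rule, where the md5-based bucketing is an assumption of this sketch and not the real partitioning algorithm:

import hashlib

def owning_agent(agents, resource_id):
    # Deterministically map a resource to one agent so only that agent
    # polls it; placeholder for real hashring partitioning.
    digest = int(hashlib.md5(resource_id.encode()).hexdigest(), 16)
    return sorted(agents)[digest % len(agents)]

def should_poll(group_name, agents, me, resource_id):
    if group_name is None:  # the case in this log: no coordination needed
        return True
    return owning_agent(agents, resource_id) == me

print(should_poll(None, ["agent-a"], "agent-a", "b7d5e999"))  # True, as logged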
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.871 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.871 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.872 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.872 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.872 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.872 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T23:52:36.870006) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.873 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T23:52:36.872438) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
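Note the two thread ids in play: worker 14 emits "Pollster heartbeat update" as each pollster runs, and worker 12 later logs "Updated heartbeat for ..." with the recorded timestamp. A sketch of that producer/consumer heartbeat pattern, assuming a simple lock-guarded dict; the agent's actual mechanism is not visible in the log:

import threading
from datetime import datetime, timezone

class HeartbeatBoard:
    def __init__(self):
        self._lock = threading.Lock()
        self._beats = {}

    def heartbeat(self, pollster):           # called from polling threads
        with self._lock:
            self._beats[pollster] = datetime.now(timezone.utc)

    def report(self):                        # called from a status thread
        with self._lock:
            for name, ts in self._beats.items():
                print(f"Updated heartbeat for {name} ({ts.isoformat()})")

board = HeartbeatBoard()
board.heartbeat("disk.ephemeral.size")
board.report()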
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.878 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.884 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.885 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
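The two "volume" lines are one cumulative sample per instance: the guests have received 25 and 13 packets in total so far. A sketch of the stats-to-sample step under the assumption that per-interface rx counters are summed per instance; the field name rx_packets and the Sample shape are this sketch's inventions:

from dataclasses import dataclass

@dataclass
class Sample:
    resource_id: str
    name: str
    volume: int  # cumulative counter, as in the log

def stats_to_sample(instance_id, iface_stats):
    total = sum(s["rx_packets"] for s in iface_stats)
    return Sample(instance_id, "network.incoming.packets", total)

print(stats_to_sample("b7d5e999-38ca-46e8-b572-cc9fad0fc2cc",
                      [{"rx_packets": 25}]))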
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.885 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.886 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.886 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.886 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.886 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.887 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T23:52:36.886789) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.888 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.888 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.888 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.888 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.888 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.889 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T23:52:36.888943) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.889 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.889 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.889 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.890 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.890 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.890 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.891 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.891 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.891 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.891 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T23:52:36.891498) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.922 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/cpu volume: 291000000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:36 compute-0 nova_compute[189387]: 2025-11-26 23:52:36.924 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.958 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/cpu volume: 334730000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.959 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
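The cpu meter's volume is cumulative guest CPU time, and the magnitudes here (hundreds of billions) are consistent with nanoseconds, so 291000000000 would amount to 291 s of CPU time for the first guest. A one-line conversion, with the nanosecond unit stated as an inference from the values rather than something the log spells out:

def cpu_seconds(cpu_time_ns):
    # Assumes the counter is in nanoseconds (matches the logged magnitude).
    return cpu_time_ns / 1e9

print(cpu_seconds(291_000_000_000))   # 291.0 s for b7d5e999-...
print(cpu_seconds(334_730_000_000))   # 334.73 s for 0449208f-...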
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.959 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.960 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.960 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.960 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.960 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.961 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.961 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T23:52:36.960751) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.961 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.962 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.962 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.963 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.963 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.963 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.963 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.964 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T23:52:36.963615) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.984 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:36.984 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.156 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.156 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.157 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
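disk.device.capacity emits one sample per block device, which is why each instance produces two lines: 1073741824 bytes is exactly the 1 GiB root disk of the m1.nano flavor, and 509952 bytes matches a second small volume such as a config drive. A per-device sketch; the device names are assumptions:

GiB = 1024 ** 3

def device_capacity_samples(instance_id, devices):
    # One (instance, device, capacity) sample per block device.
    return [(instance_id, dev, cap) for dev, cap in devices.items()]

for s in device_capacity_samples("b7d5e999-38ca-46e8-b572-cc9fad0fc2cc",
                                 {"vda": 1 * GiB, "vdb": 509952}):
    print(s)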
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.157 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.158 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.158 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.158 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.158 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.159 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.159 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.160 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.160 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.160 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.161 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.161 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.162 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.162 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.162 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.161 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T23:52:37.158828) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.163 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
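The .delta variant reports the change in the cumulative counter since the previous cycle, and both guests show 0 here: no bytes left either instance between polls. A sketch of that cached-difference computation; the first-cycle behaviour (treating the first reading as a zero delta) is a choice of this sketch, not something the log documents:

class DeltaMeter:
    def __init__(self):
        self._last = {}

    def delta(self, resource_id, current):
        previous = self._last.get(resource_id, current)
        self._last[resource_id] = current
        return current - previous

m = DeltaMeter()
print(m.delta("b7d5e999", 1620))  # 0 on first observation
print(m.delta("b7d5e999", 1620))  # 0: counter unchanged between cycles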
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.163 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.164 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.164 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.164 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.164 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.164 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/memory.usage volume: 43.5 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.165 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/memory.usage volume: 42.59375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.166 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
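The fractional memory.usage values (43.5 and 42.59375 MB against a 128 MB flavor) are exactly what a KiB-granular counter divided by 1024 produces: 44544 KiB and 43616 KiB respectively. That unit chain is an inference from the numbers, not something the log states:

def kib_to_mb(kib):
    # 1024 KiB per reported MB; reproduces the logged fractions exactly.
    return kib / 1024

assert kib_to_mb(44544) == 43.5       # instance b7d5e999-38ca-...
assert kib_to_mb(43616) == 42.59375   # instance 0449208f-d12b-...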
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.166 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.166 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.166 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.167 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.167 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.167 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.168 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.bytes volume: 1520 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.168 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.169 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.170 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
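Unlike the meters above, network.outgoing.bytes.rate is skipped outright: the manager found no resources this cycle that the rate pollster still needs to handle. One plausible reading of that gate, sketched with a per-cycle cache; the real skip condition lives inside _internal_pollster_run and is not fully visible from the log:

def resources_for(pollster, discovered, handled_this_cycle):
    # Keep only resources not already handled in this polling cycle;
    # an empty result reproduces the logged skip message.
    fresh = [r for r in discovered if r not in handled_this_cycle]
    if not fresh:
        print(f"Skip pollster {pollster}, no new resources found this cycle")
    return fresh

handled = {"b7d5e999", "0449208f"}
resources_for("network.outgoing.bytes.rate",
              ["b7d5e999", "0449208f"], handled)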
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.170 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.170 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.170 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.170 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.169 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T23:52:37.162160) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.171 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.171 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.171 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.172 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T23:52:37.164598) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.172 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.173 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.173 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.173 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.174 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.174 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.175 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T23:52:37.167356) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.175 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T23:52:37.171038) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.176 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T23:52:37.174410) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.238 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.bytes volume: 30137344 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.239 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.303 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.bytes volume: 30812672 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.304 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.305 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.305 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.305 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.306 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.306 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.306 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.307 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.307 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.308 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.308 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.308 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.309 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.309 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.310 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.309 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T23:52:37.306345) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.311 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.311 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.311 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.312 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.312 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.312 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.312 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.latency volume: 739295997 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.312 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.latency volume: 89632121 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.313 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.latency volume: 968376186 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.313 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.latency volume: 67351116 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.314 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.314 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.314 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.314 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.314 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.314 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.315 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.315 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.315 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.316 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.316 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.316 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.316 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.316 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.316 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.316 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.317 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.317 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T23:52:37.309346) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.317 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.318 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.318 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T23:52:37.312433) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.318 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
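Across this cycle the root device of the first guest has now reported three distinct disk gauges: capacity 1073741824, allocation 30744576, and usage 29884416 bytes. That triple matches the shape of a libvirt-style block-info result (capacity / allocation / physical); the mapping is an assumption, but the arithmetic shows how thinly provisioned the disk is:

from collections import namedtuple

BlockInfo = namedtuple("BlockInfo", "capacity allocation physical")

# Values for instance b7d5e999-38ca-... taken from the log; the meter-to-
# field mapping is this sketch's assumption.
vda = BlockInfo(capacity=1073741824, allocation=30744576, physical=29884416)
print(f"provisioned: {vda.capacity} B, written: {vda.allocation} B "
      f"({100 * vda.allocation / vda.capacity:.1f}% of capacity)")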
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.319 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.319 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.319 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.319 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.319 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.319 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.requests volume: 1090 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.320 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.320 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.requests volume: 1112 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.320 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.320 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T23:52:37.314677) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.321 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.321 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.321 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.321 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.322 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.322 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.bytes volume: 72884224 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.322 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.322 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.323 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.323 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.324 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.324 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.324 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.324 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.324 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.324 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.322 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T23:52:37.316660) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.325 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T23:52:37.319652) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.326 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
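Both power.state samples above report volume 1. Assuming the pollster forwards the hypervisor's raw state, 1 is "running" in libvirt's virDomainState enum (nova's own power_state enum also uses 1 for RUNNING, so the reading is unambiguous either way). For reference:

    # virDomainState values, for decoding power.state samples like the
    # "volume: 1" entries above (1 == running on both instances):
    LIBVIRT_POWER_STATE = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    assert LIBVIRT_POWER_STATE[1] == "running"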
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.326 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.326 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.326 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.326 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.326 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.latency volume: 2705910374 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.327 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.327 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.latency volume: 4008872658 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.327 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.327 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.328 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.328 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.328 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.328 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.329 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.327 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T23:52:37.322126) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.329 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.329 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.330 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.330 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.330 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.330 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.330 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.requests volume: 316 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.330 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.330 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.requests volume: 334 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.331 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.329 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T23:52:37.324467) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.331 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.331 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.331 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
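The skip above means discovery returned no resources this pollster still needs to handle in the current cycle. The *.rate meters are in any case derived quantities: a rate needs two consecutive cumulative readings. A generic sketch of that computation (illustrative only, not ceilometer's exact code path):

    # rate = delta(counter) / delta(time), from two cumulative samples
    def rate(prev_value, prev_ts, cur_value, cur_ts):
        dt = cur_ts - prev_ts
        return (cur_value - prev_value) / dt if dt > 0 else 0.0

    assert rate(1_000, 0.0, 6_000, 10.0) == 500.0  # e.g. bytes/second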
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.332 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T23:52:37.326634) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.335 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T23:52:37.328476) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:52:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:52:37.336 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T23:52:37.330294) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
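Note the two worker ids in this stanza: worker 14 emits the "Pollster heartbeat update" lines as it polls, while worker 12 later logs "Updated heartbeat for ..." with the recorded timestamp, which is why a line stamped 23:52:37.332 can appear after entries stamped .335. A hedged sketch of that two-stage bookkeeping (the function and variable names are assumptions, not ceilometer's):

    import datetime

    heartbeats = {}  # pollster name -> last poll timestamp

    def heartbeat(name):
        # polling worker: record the moment the pollster ran
        heartbeats[name] = datetime.datetime.now(datetime.timezone.utc)

    def update_status(name):
        # status worker: log the recorded timestamp later, so its log
        # line can trail the polling lines it refers to
        print(f"Updated heartbeat for {name} ({heartbeats[name].isoformat()})")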
Nov 26 23:52:40 compute-0 nova_compute[189387]: 2025-11-26 23:52:40.969 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:52:41 compute-0 podman[256394]: 2025-11-26 23:52:41.848909175 +0000 UTC m=+0.131595197 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Nov 26 23:52:41 compute-0 nova_compute[189387]: 2025-11-26 23:52:41.928 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:52:45 compute-0 podman[256413]: 2025-11-26 23:52:45.813437997 +0000 UTC m=+0.114878192 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:52:45 compute-0 nova_compute[189387]: 2025-11-26 23:52:45.975 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:52:46 compute-0 nova_compute[189387]: 2025-11-26 23:52:46.931 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:52:50 compute-0 nova_compute[189387]: 2025-11-26 23:52:50.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:52:50 compute-0 nova_compute[189387]: 2025-11-26 23:52:50.124 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
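The pair of lines above is nova's guard for soft-delete reclamation: with reclaim_instance_interval left at its default of 0 in nova.conf, _reclaim_queued_deletes is a no-op every cycle. Paraphrased (not nova's literal source):

    reclaim_instance_interval = 0  # nova.conf [DEFAULT] reclaim_instance_interval

    def _reclaim_queued_deletes():
        if reclaim_instance_interval <= 0:
            # "CONF.reclaim_instance_interval <= 0, skipping..."
            return
        # otherwise: restore or purge instances queued for soft delete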
Nov 26 23:52:50 compute-0 nova_compute[189387]: 2025-11-26 23:52:50.984 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:52:51 compute-0 nova_compute[189387]: 2025-11-26 23:52:51.935 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:52:52 compute-0 nova_compute[189387]: 2025-11-26 23:52:52.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:52:52 compute-0 nova_compute[189387]: 2025-11-26 23:52:52.124 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:52:52 compute-0 nova_compute[189387]: 2025-11-26 23:52:52.573 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:52:52 compute-0 nova_compute[189387]: 2025-11-26 23:52:52.574 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:52:52 compute-0 nova_compute[189387]: 2025-11-26 23:52:52.575 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 23:52:52 compute-0 podman[256437]: 2025-11-26 23:52:52.832793036 +0000 UTC m=+0.126625564 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_id=edpm, org.label-schema.build-date=20251125)
Nov 26 23:52:53 compute-0 nova_compute[189387]: 2025-11-26 23:52:53.695 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Updating instance_info_cache with network_info: [{"id": "538c994f-bee1-4965-9065-a8ef17e40bea", "address": "fa:16:3e:47:75:6d", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap538c994f-be", "ovs_interfaceid": "538c994f-bee1-4965-9065-a8ef17e40bea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 23:52:53 compute-0 nova_compute[189387]: 2025-11-26 23:52:53.711 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 23:52:53 compute-0 nova_compute[189387]: 2025-11-26 23:52:53.712 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
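The Acquiring/Acquired/Releasing lines around this cache refresh come from oslo.concurrency. A minimal usage sketch with the real lockutils API, reusing the lock name from the log (the body is a stand-in for nova's refresh logic):

    from oslo_concurrency import lockutils

    instance_uuid = "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc"
    # Serializes network-info-cache refreshes per instance; entering and
    # leaving this block produces the same kind of Acquiring/Acquired/
    # Releasing DEBUG lines seen above.
    with lockutils.lock(f"refresh_cache-{instance_uuid}"):
        pass  # refresh the instance's network info cache here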
Nov 26 23:52:53 compute-0 nova_compute[189387]: 2025-11-26 23:52:53.713 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:52:53 compute-0 nova_compute[189387]: 2025-11-26 23:52:53.740 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:52:53 compute-0 nova_compute[189387]: 2025-11-26 23:52:53.741 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:52:53 compute-0 nova_compute[189387]: 2025-11-26 23:52:53.741 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:52:53 compute-0 nova_compute[189387]: 2025-11-26 23:52:53.742 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:52:53 compute-0 nova_compute[189387]: 2025-11-26 23:52:53.821 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:52:53 compute-0 nova_compute[189387]: 2025-11-26 23:52:53.919 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:52:53 compute-0 nova_compute[189387]: 2025-11-26 23:52:53.920 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:52:54 compute-0 nova_compute[189387]: 2025-11-26 23:52:54.017 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:52:54 compute-0 nova_compute[189387]: 2025-11-26 23:52:54.028 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:52:54 compute-0 nova_compute[189387]: 2025-11-26 23:52:54.130 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:52:54 compute-0 nova_compute[189387]: 2025-11-26 23:52:54.132 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:52:54 compute-0 nova_compute[189387]: 2025-11-26 23:52:54.204 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
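Each qemu-img info call above is wrapped in oslo.concurrency's prlimit helper, capping address space at 1 GiB (--as=1073741824) and CPU time at 30 s (--cpu=30) so a hung or ballooning qemu-img cannot take the compute agent down with it. A sketch using the real processutils API (the disk path placeholder is illustrative):

    from oslo_concurrency import processutils

    QEMU_IMG_LIMITS = processutils.ProcessLimits(
        address_space=1073741824,  # bytes  -> --as=1073741824
        cpu_time=30,               # seconds -> --cpu=30
    )

    # When prlimit= is passed, processutils builds the
    # "python3 -m oslo_concurrency.prlimit ..." wrapper seen in the log.
    out, err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", "/var/lib/nova/instances/<instance-uuid>/disk",
        "--force-share", "--output=json",
        prlimit=QEMU_IMG_LIMITS,
    )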
Nov 26 23:52:54 compute-0 nova_compute[189387]: 2025-11-26 23:52:54.717 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:52:54 compute-0 nova_compute[189387]: 2025-11-26 23:52:54.718 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4883MB free_disk=72.24822616577148GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 23:52:54 compute-0 nova_compute[189387]: 2025-11-26 23:52:54.719 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:52:54 compute-0 nova_compute[189387]: 2025-11-26 23:52:54.719 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:52:54 compute-0 nova_compute[189387]: 2025-11-26 23:52:54.860 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 0449208f-d12b-40cb-aa71-6f67f687cb6f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:52:54 compute-0 nova_compute[189387]: 2025-11-26 23:52:54.861 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance b7d5e999-38ca-46e8-b572-cc9fad0fc2cc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:52:54 compute-0 nova_compute[189387]: 2025-11-26 23:52:54.861 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:52:54 compute-0 nova_compute[189387]: 2025-11-26 23:52:54.861 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 23:52:54 compute-0 nova_compute[189387]: 2025-11-26 23:52:54.927 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:52:54 compute-0 nova_compute[189387]: 2025-11-26 23:52:54.947 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 23:52:54 compute-0 nova_compute[189387]: 2025-11-26 23:52:54.950 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:52:54 compute-0 nova_compute[189387]: 2025-11-26 23:52:54.951 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.231s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
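The inventory dict above fully determines what placement will schedule against: capacity per resource class is (total - reserved) * allocation_ratio. Worked out for this node, that is 32 VCPU (of which 2 are allocated), 7168 MB of RAM, and 70.2 GB of disk:

    # capacity = (total - reserved) * allocation_ratio, per resource class
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2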
Nov 26 23:52:55 compute-0 nova_compute[189387]: 2025-11-26 23:52:55.986 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:52:56 compute-0 nova_compute[189387]: 2025-11-26 23:52:56.938 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:52:58 compute-0 nova_compute[189387]: 2025-11-26 23:52:58.363 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:52:58 compute-0 nova_compute[189387]: 2025-11-26 23:52:58.364 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:52:59 compute-0 nova_compute[189387]: 2025-11-26 23:52:59.121 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:52:59 compute-0 nova_compute[189387]: 2025-11-26 23:52:59.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:52:59 compute-0 nova_compute[189387]: 2025-11-26 23:52:59.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:52:59 compute-0 podman[203621]: time="2025-11-26T23:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:52:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:52:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4813 "" "Go-http-client/1.1"
Nov 26 23:53:00 compute-0 podman[256471]: 2025-11-26 23:53:00.845636133 +0000 UTC m=+0.125953196 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 23:53:00 compute-0 podman[256472]: 2025-11-26 23:53:00.846135727 +0000 UTC m=+0.115227471 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 26 23:53:00 compute-0 podman[256470]: 2025-11-26 23:53:00.850337079 +0000 UTC m=+0.135622984 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Nov 26 23:53:00 compute-0 podman[256473]: 2025-11-26 23:53:00.850889614 +0000 UTC m=+0.126551162 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 26 23:53:00 compute-0 podman[256469]: 2025-11-26 23:53:00.854236212 +0000 UTC m=+0.141743307 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.openshift.tags=base rhel9, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, release-0.7.12=, config_id=edpm, container_name=kepler, maintainer=Red Hat, Inc., name=ubi9, release=1214.1726694543)
Nov 26 23:53:00 compute-0 podman[256474]: 2025-11-26 23:53:00.86126432 +0000 UTC m=+0.129420099 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, release=1755695350, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 26 23:53:00 compute-0 nova_compute[189387]: 2025-11-26 23:53:00.987 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:53:01 compute-0 openstack_network_exporter[205787]: ERROR   23:53:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:53:01 compute-0 openstack_network_exporter[205787]: ERROR   23:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:53:01 compute-0 openstack_network_exporter[205787]: ERROR   23:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:53:01 compute-0 openstack_network_exporter[205787]: ERROR   23:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:53:01 compute-0 openstack_network_exporter[205787]: ERROR   23:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:53:01 compute-0 nova_compute[189387]: 2025-11-26 23:53:01.942 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:53:02 compute-0 nova_compute[189387]: 2025-11-26 23:53:02.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:53:05 compute-0 nova_compute[189387]: 2025-11-26 23:53:05.990 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:53:06 compute-0 nova_compute[189387]: 2025-11-26 23:53:06.945 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:53:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:53:09.663 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:53:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:53:09.664 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:53:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:53:09.665 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:53:10 compute-0 nova_compute[189387]: 2025-11-26 23:53:10.993 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:53:11 compute-0 nova_compute[189387]: 2025-11-26 23:53:11.950 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:53:12 compute-0 podman[256583]: 2025-11-26 23:53:12.839600795 +0000 UTC m=+0.112667632 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 26 23:53:15 compute-0 nova_compute[189387]: 2025-11-26 23:53:15.996 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:53:16 compute-0 podman[256604]: 2025-11-26 23:53:16.777267523 +0000 UTC m=+0.077938058 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 23:53:16 compute-0 nova_compute[189387]: 2025-11-26 23:53:16.953 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:53:21 compute-0 nova_compute[189387]: 2025-11-26 23:53:21.000 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:53:21 compute-0 nova_compute[189387]: 2025-11-26 23:53:21.958 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:53:23 compute-0 podman[256629]: 2025-11-26 23:53:23.795615022 +0000 UTC m=+0.082428847 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:53:26 compute-0 nova_compute[189387]: 2025-11-26 23:53:26.004 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:53:26 compute-0 nova_compute[189387]: 2025-11-26 23:53:26.962 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:53:29 compute-0 podman[203621]: time="2025-11-26T23:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:53:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:53:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4816 "" "Go-http-client/1.1"
Nov 26 23:53:31 compute-0 nova_compute[189387]: 2025-11-26 23:53:31.008 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:53:31 compute-0 openstack_network_exporter[205787]: ERROR   23:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:53:31 compute-0 openstack_network_exporter[205787]: ERROR   23:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:53:31 compute-0 openstack_network_exporter[205787]: ERROR   23:53:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:53:31 compute-0 openstack_network_exporter[205787]: ERROR   23:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:53:31 compute-0 openstack_network_exporter[205787]: ERROR   23:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:53:31 compute-0 podman[256646]: 2025-11-26 23:53:31.848182809 +0000 UTC m=+0.126643256 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.4, distribution-scope=public, maintainer=Red Hat, Inc., release=1214.1726694543)
Nov 26 23:53:31 compute-0 podman[256654]: 2025-11-26 23:53:31.858936455 +0000 UTC m=+0.097737225 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Nov 26 23:53:31 compute-0 podman[256666]: 2025-11-26 23:53:31.866410554 +0000 UTC m=+0.090945804 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, io.buildah.version=1.33.7, managed_by=edpm_ansible, architecture=x86_64, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 26 23:53:31 compute-0 podman[256659]: 2025-11-26 23:53:31.887048254 +0000 UTC m=+0.120370078 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 26 23:53:31 compute-0 podman[256651]: 2025-11-26 23:53:31.88878112 +0000 UTC m=+0.129961473 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:53:31 compute-0 podman[256647]: 2025-11-26 23:53:31.893544277 +0000 UTC m=+0.150836000 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:53:31 compute-0 nova_compute[189387]: 2025-11-26 23:53:31.965 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:53:36 compute-0 nova_compute[189387]: 2025-11-26 23:53:36.012 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:53:36 compute-0 nova_compute[189387]: 2025-11-26 23:53:36.967 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:53:41 compute-0 nova_compute[189387]: 2025-11-26 23:53:41.013 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:53:41 compute-0 nova_compute[189387]: 2025-11-26 23:53:41.971 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:53:43 compute-0 podman[256762]: 2025-11-26 23:53:43.845953283 +0000 UTC m=+0.123032909 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd)
Nov 26 23:53:46 compute-0 nova_compute[189387]: 2025-11-26 23:53:46.016 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:53:46 compute-0 nova_compute[189387]: 2025-11-26 23:53:46.976 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:53:47 compute-0 podman[256781]: 2025-11-26 23:53:47.786945469 +0000 UTC m=+0.078118353 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 23:53:51 compute-0 nova_compute[189387]: 2025-11-26 23:53:51.018 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:53:51 compute-0 nova_compute[189387]: 2025-11-26 23:53:51.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:53:51 compute-0 nova_compute[189387]: 2025-11-26 23:53:51.124 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 26 23:53:51 compute-0 nova_compute[189387]: 2025-11-26 23:53:51.981 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:53:54 compute-0 nova_compute[189387]: 2025-11-26 23:53:54.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:53:54 compute-0 nova_compute[189387]: 2025-11-26 23:53:54.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 26 23:53:54 compute-0 nova_compute[189387]: 2025-11-26 23:53:54.126 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 26 23:53:54 compute-0 nova_compute[189387]: 2025-11-26 23:53:54.518 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-0449208f-d12b-40cb-aa71-6f67f687cb6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 26 23:53:54 compute-0 nova_compute[189387]: 2025-11-26 23:53:54.518 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-0449208f-d12b-40cb-aa71-6f67f687cb6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 26 23:53:54 compute-0 nova_compute[189387]: 2025-11-26 23:53:54.519 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 26 23:53:54 compute-0 nova_compute[189387]: 2025-11-26 23:53:54.520 189391 DEBUG nova.objects.instance [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0449208f-d12b-40cb-aa71-6f67f687cb6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:53:54 compute-0 podman[256805]: 2025-11-26 23:53:54.852502943 +0000 UTC m=+0.131922146 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 26 23:53:55 compute-0 nova_compute[189387]: 2025-11-26 23:53:55.917 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Updating instance_info_cache with network_info: [{"id": "a6675240-60ea-47db-9ef6-66080adb5743", "address": "fa:16:3e:d6:2e:64", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6675240-60", "ovs_interfaceid": "a6675240-60ea-47db-9ef6-66080adb5743", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:53:55 compute-0 nova_compute[189387]: 2025-11-26 23:53:55.941 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-0449208f-d12b-40cb-aa71-6f67f687cb6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 26 23:53:55 compute-0 nova_compute[189387]: 2025-11-26 23:53:55.942 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 26 23:53:55 compute-0 nova_compute[189387]: 2025-11-26 23:53:55.943 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:53:55 compute-0 nova_compute[189387]: 2025-11-26 23:53:55.982 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:53:55 compute-0 nova_compute[189387]: 2025-11-26 23:53:55.983 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:53:55 compute-0 nova_compute[189387]: 2025-11-26 23:53:55.983 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:53:55 compute-0 nova_compute[189387]: 2025-11-26 23:53:55.983 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 26 23:53:56 compute-0 nova_compute[189387]: 2025-11-26 23:53:56.021 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:53:56 compute-0 nova_compute[189387]: 2025-11-26 23:53:56.088 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:53:56 compute-0 nova_compute[189387]: 2025-11-26 23:53:56.159 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:53:56 compute-0 nova_compute[189387]: 2025-11-26 23:53:56.160 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:53:56 compute-0 nova_compute[189387]: 2025-11-26 23:53:56.251 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:53:56 compute-0 nova_compute[189387]: 2025-11-26 23:53:56.259 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:53:56 compute-0 nova_compute[189387]: 2025-11-26 23:53:56.331 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:53:56 compute-0 nova_compute[189387]: 2025-11-26 23:53:56.333 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 26 23:53:56 compute-0 nova_compute[189387]: 2025-11-26 23:53:56.394 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 26 23:53:56 compute-0 nova_compute[189387]: 2025-11-26 23:53:56.753 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 26 23:53:56 compute-0 nova_compute[189387]: 2025-11-26 23:53:56.755 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4900MB free_disk=72.24777603149414GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 26 23:53:56 compute-0 nova_compute[189387]: 2025-11-26 23:53:56.755 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:53:56 compute-0 nova_compute[189387]: 2025-11-26 23:53:56.755 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:53:56 compute-0 nova_compute[189387]: 2025-11-26 23:53:56.861 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 0449208f-d12b-40cb-aa71-6f67f687cb6f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 23:53:56 compute-0 nova_compute[189387]: 2025-11-26 23:53:56.862 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance b7d5e999-38ca-46e8-b572-cc9fad0fc2cc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 26 23:53:56 compute-0 nova_compute[189387]: 2025-11-26 23:53:56.862 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 26 23:53:56 compute-0 nova_compute[189387]: 2025-11-26 23:53:56.862 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 26 23:53:56 compute-0 nova_compute[189387]: 2025-11-26 23:53:56.945 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:53:56 compute-0 nova_compute[189387]: 2025-11-26 23:53:56.961 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
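This inventory record is what placement uses to size the provider: for each resource class, schedulable capacity is (total - reserved) * allocation_ratio, which is why 8 physical VCPUs can back up to 32 allocated ones here. A short sketch of that calculation using the figures from the inventory line above:

# Inventory exactly as reported for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable")
# -> VCPU: 32, MEMORY_MB: 7168, DISK_GB: 70.2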
Nov 26 23:53:56 compute-0 nova_compute[189387]: 2025-11-26 23:53:56.963 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:53:56 compute-0 nova_compute[189387]: 2025-11-26 23:53:56.963 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.208s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
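The Acquiring/acquired/released triple around this update (held 0.208s) is oslo.concurrency's standard instrumentation: the resource tracker serializes every claim and update under a single "compute_resources" semaphore. A minimal sketch of the pattern, simplified relative to nova's actual code; recompute_resource_view is a hypothetical placeholder:

from oslo_concurrency import lockutils

@lockutils.synchronized("compute_resources")
def update_available_resource():
    # Runs with the "compute_resources" semaphore held, so the resource
    # view cannot change mid-update; lockutils itself emits the
    # Acquiring/acquired/released DEBUG lines seen above.
    recompute_resource_view()

def recompute_resource_view():   # hypothetical placeholder
    pass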
Nov 26 23:53:56 compute-0 nova_compute[189387]: 2025-11-26 23:53:56.985 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:53:58 compute-0 nova_compute[189387]: 2025-11-26 23:53:58.144 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:53:58 compute-0 nova_compute[189387]: 2025-11-26 23:53:58.145 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:53:59 compute-0 nova_compute[189387]: 2025-11-26 23:53:59.121 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:53:59 compute-0 nova_compute[189387]: 2025-11-26 23:53:59.122 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:53:59 compute-0 nova_compute[189387]: 2025-11-26 23:53:59.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
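These "Running periodic task" lines come from oslo.service's periodic-task machinery: manager methods decorated as periodic tasks are collected at class-definition time and fired from a timer loop via run_periodic_tasks. A minimal sketch of the pattern, simplified relative to nova's ComputeManager:

from oslo_config import cfg
from oslo_service import periodic_task

class Manager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(cfg.CONF)

    @periodic_task.periodic_task(spacing=60)
    def _poll_rescued_instances(self, context):
        # Fired roughly every 60s; each invocation produces a
        # "Running periodic task ..." DEBUG line like the ones above.
        pass

mgr = Manager()
mgr.run_periodic_tasks(context=None)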
Nov 26 23:53:59 compute-0 podman[203621]: time="2025-11-26T23:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:53:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:53:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4813 "" "Go-http-client/1.1"
Nov 26 23:54:01 compute-0 nova_compute[189387]: 2025-11-26 23:54:01.024 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:54:01 compute-0 openstack_network_exporter[205787]: ERROR   23:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:54:01 compute-0 openstack_network_exporter[205787]: ERROR   23:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:54:01 compute-0 openstack_network_exporter[205787]: ERROR   23:54:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:54:01 compute-0 openstack_network_exporter[205787]: ERROR   23:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:54:01 compute-0 openstack_network_exporter[205787]: ERROR   23:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
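These exporter errors are expected on a compute node: appctl-style calls locate a daemon through its control socket (named <daemon>.<pid>.ctl under the run directory), and ovn-northd runs on the control plane, not here; likewise no userspace (netdev) datapath exists for the PMD queries to inspect. A quick check of which daemons are reachable, assuming the run directories mounted into the exporter container:

import glob

# Run directories as mounted into the exporter container (see config_data below).
for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
    sockets = glob.glob(pattern)
    print(pattern, "->", sockets or "no control socket files found")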
Nov 26 23:54:01 compute-0 nova_compute[189387]: 2025-11-26 23:54:01.989 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:54:02 compute-0 podman[256858]: 2025-11-26 23:54:02.848423021 +0000 UTC m=+0.083526296 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=)
Nov 26 23:54:02 compute-0 podman[256836]: 2025-11-26 23:54:02.857682718 +0000 UTC m=+0.128303280 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, release-0.7.12=, io.openshift.expose-services=, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1214.1726694543, vcs-type=git)
Nov 26 23:54:02 compute-0 podman[256838]: 2025-11-26 23:54:02.868916657 +0000 UTC m=+0.137104584 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 23:54:02 compute-0 podman[256839]: 2025-11-26 23:54:02.877124136 +0000 UTC m=+0.119345300 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 26 23:54:02 compute-0 podman[256849]: 2025-11-26 23:54:02.877184847 +0000 UTC m=+0.107928296 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:54:02 compute-0 podman[256837]: 2025-11-26 23:54:02.888285173 +0000 UTC m=+0.147596724 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
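Each health_status event above is podman recording the outcome of that container's configured healthcheck (the 'healthcheck' block inside config_data, executed via the mounted /openstack/healthcheck script). The same check can be triggered by hand; a sketch using subprocess, with the container name taken from the log:

import subprocess

# Runs the container's configured healthcheck once, the same way the
# periodic podman healthcheck does for the events logged above.
result = subprocess.run(
    ["podman", "healthcheck", "run", "ovn_controller"],
    capture_output=True, text=True,
)
print("healthy" if result.returncode == 0 else "unhealthy")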
Nov 26 23:54:04 compute-0 nova_compute[189387]: 2025-11-26 23:54:04.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:54:06 compute-0 nova_compute[189387]: 2025-11-26 23:54:06.028 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:54:06 compute-0 nova_compute[189387]: 2025-11-26 23:54:06.993 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:54:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:54:09.664 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:54:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:54:09.665 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:54:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:54:09.665 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:54:11 compute-0 nova_compute[189387]: 2025-11-26 23:54:11.030 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:54:11 compute-0 nova_compute[189387]: 2025-11-26 23:54:11.120 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:54:11 compute-0 nova_compute[189387]: 2025-11-26 23:54:11.996 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:54:14 compute-0 podman[256960]: 2025-11-26 23:54:14.820844608 +0000 UTC m=+0.115672262 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 26 23:54:16 compute-0 nova_compute[189387]: 2025-11-26 23:54:16.031 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:54:17 compute-0 nova_compute[189387]: 2025-11-26 23:54:16.999 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:54:18 compute-0 podman[256979]: 2025-11-26 23:54:18.795984994 +0000 UTC m=+0.085645973 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:54:21 compute-0 nova_compute[189387]: 2025-11-26 23:54:21.035 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:54:22 compute-0 nova_compute[189387]: 2025-11-26 23:54:22.003 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:54:25 compute-0 podman[257004]: 2025-11-26 23:54:25.850778046 +0000 UTC m=+0.148751284 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute)
Nov 26 23:54:26 compute-0 nova_compute[189387]: 2025-11-26 23:54:26.039 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:54:27 compute-0 nova_compute[189387]: 2025-11-26 23:54:27.007 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:54:29 compute-0 podman[203621]: time="2025-11-26T23:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:54:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:54:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4819 "" "Go-http-client/1.1"
Nov 26 23:54:31 compute-0 nova_compute[189387]: 2025-11-26 23:54:31.040 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:54:31 compute-0 openstack_network_exporter[205787]: ERROR   23:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:54:31 compute-0 openstack_network_exporter[205787]: ERROR   23:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:54:31 compute-0 openstack_network_exporter[205787]: ERROR   23:54:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:54:31 compute-0 openstack_network_exporter[205787]: ERROR   23:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:54:31 compute-0 openstack_network_exporter[205787]: ERROR   23:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:54:32 compute-0 nova_compute[189387]: 2025-11-26 23:54:32.009 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:54:33 compute-0 podman[257023]: 2025-11-26 23:54:33.842948867 +0000 UTC m=+0.114976144 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, container_name=kepler, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, distribution-scope=public, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, maintainer=Red Hat, Inc., managed_by=edpm_ansible)
Nov 26 23:54:33 compute-0 podman[257045]: 2025-11-26 23:54:33.846108131 +0000 UTC m=+0.087864361 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.openshift.tags=minimal rhel9, config_id=edpm, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, release=1755695350)
Nov 26 23:54:33 compute-0 podman[257026]: 2025-11-26 23:54:33.858996794 +0000 UTC m=+0.111773198 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 26 23:54:33 compute-0 podman[257025]: 2025-11-26 23:54:33.865430306 +0000 UTC m=+0.131862864 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 23:54:33 compute-0 podman[257024]: 2025-11-26 23:54:33.879134111 +0000 UTC m=+0.141102420 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 26 23:54:33 compute-0 podman[257039]: 2025-11-26 23:54:33.884862824 +0000 UTC m=+0.118673853 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125)
Nov 26 23:54:36 compute-0 nova_compute[189387]: 2025-11-26 23:54:36.043 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.852 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them, so the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.853 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.853 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
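The registration lines above show one polling task handing its pollsters to a ThreadPoolExecutor; since only one worker thread is configured (as logged at 23:54:36.853), the pollsters are queued and effectively run one after another. A minimal sketch of that dispatch pattern, much simplified relative to ceilometer's AgentManager:

from concurrent.futures import ThreadPoolExecutor

def make_pollster(name):
    def poll():
        # Each real pollster discovers resources and emits samples;
        # here it just reports its own name.
        return f"polled {name}"
    return poll

pollsters = [make_pollster(n) for n in
             ("disk.ephemeral.size", "network.incoming.packets")]

# One worker, as in the log: pollsters are queued and run one at a time.
with ThreadPoolExecutor(max_workers=1) as executor:
    futures = [executor.submit(p) for p in pollsters]
    for f in futures:
        print(f.result())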
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.862 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b7d5e999-38ca-46e8-b572-cc9fad0fc2cc', 'name': 'te-7486994-asg-gqdvh3lloqbk-w3pew7r5aglv-t7fkcg4jtkgf', 'flavor': {'id': 'a4234b2d-ed51-4e17-ad57-a8fb6154451b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'aa1a3d84-3b07-42eb-bb8c-755851616ed6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '717a3950b66241768222cb5d4ba3291e', 'user_id': '5715267a6ec9422aa9b3ef4a2956aa77', 'hostId': '27d3802b1abe41bf2d1abd490eb0aa08acfb598924ded34a7e1a15fc', 'status': 'active', 'metadata': {'metering.server_group': '92e43243-aca7-437e-ae08-bcb42a48e489'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.866 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '0449208f-d12b-40cb-aa71-6f67f687cb6f', 'name': 'te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di', 'flavor': {'id': 'a4234b2d-ed51-4e17-ad57-a8fb6154451b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'aa1a3d84-3b07-42eb-bb8c-755851616ed6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '717a3950b66241768222cb5d4ba3291e', 'user_id': '5715267a6ec9422aa9b3ef4a2956aa77', 'hostId': '27d3802b1abe41bf2d1abd490eb0aa08acfb598924ded34a7e1a15fc', 'status': 'active', 'metadata': {'metering.server_group': '92e43243-aca7-437e-ae08-bcb42a48e489'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
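The two discovery records above carry the instance metadata that the agent attaches to every sample in this cycle: flavor geometry, image, tenant and user IDs, and the metering.server_group tag. A minimal sketch in plain Python (not ceilometer's own code) of pulling out the fields that the meters below refer back to:

    # One discovered-instance record, abridged from the log above.
    instance = {
        'id': 'b7d5e999-38ca-46e8-b572-cc9fad0fc2cc',
        'flavor': {'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1,
                   'ephemeral': 0, 'swap': 0},
        'status': 'active',
        'metadata': {'metering.server_group': '92e43243-aca7-437e-ae08-bcb42a48e489'},
    }
    ram_mb = instance['flavor']['ram']     # 128 MB, the ceiling for memory.usage below
    root_gb = instance['flavor']['disk']   # 1 GB root disk, matches disk.device.capacity
    group = instance['metadata'].get('metering.server_group')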
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.866 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.866 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.866 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.867 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.868 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
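The disk.ephemeral.size block above shows the fixed shape every meter in this cycle follows: run discovery, check whether the pollster belongs to a coordination group (none is configured here, so no hashring filtering happens), poll, record a heartbeat, and publish. Worker thread 14 does the polling while thread 12 writes the "Updated heartbeat" lines, which is why the two streams interleave and timestamps occasionally appear out of order. A hypothetical outline of that loop; the names are illustrative, not ceilometer's API:

    def poll_one(name, coordination_group, discover, get_samples, heartbeat, publish):
        resources = discover('local_instances')   # "Executing discovery process ..."
        if coordination_group is not None:        # "Checking if we need coordination ..."
            resources = []                        # placeholder: real code would keep only
                                                  # this agent's hashring share
        samples = get_samples(resources)          # "Polling pollster <name> ..."
        heartbeat(name)                           # "Pollster heartbeat update: <name>"
        publish(samples)                          # "Finished polling pollster <name>"

    # Stand-in wiring for the cycle logged above:
    poll_one('disk.ephemeral.size', None,
             discover=lambda method: ['b7d5e999', '0449208f'],
             get_samples=lambda rs: [(r, 0) for r in rs],  # ephemeral size is 0 on m1.nano
             heartbeat=print, publish=print)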
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.868 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.869 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.869 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.869 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.869 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T23:54:36.866990) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.869 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.869 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T23:54:36.869570) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.873 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.877 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.878 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.878 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.878 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.878 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.878 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.879 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.879 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.880 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.880 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.880 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T23:54:36.879137) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.880 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.880 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.881 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.881 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.881 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.882 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.882 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.883 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T23:54:36.881037) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.883 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.883 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.883 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.883 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.884 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T23:54:36.883678) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.912 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/cpu volume: 332680000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.951 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/cpu volume: 336500000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.952 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
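The cpu meter is a cumulative counter of guest CPU time in nanoseconds, so the volumes above only turn into a utilisation figure once two polls are compared. A quick worked example from the sample just logged:

    ns_now = 332_680_000_000   # b7d5e999.../cpu at 23:54:36.912
    print(ns_now / 1e9)        # ~332.68 s of CPU time consumed since the guest started
    # Utilisation over an interval needs the previous reading as well:
    #   util = (ns_now - ns_prev) / (elapsed_ns * vcpus)
    # with vcpus = 1 for the m1.nano flavor discovered above.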
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.952 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.952 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.953 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.953 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.953 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.953 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.954 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.955 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.955 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.956 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.955 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T23:54:36.953659) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.956 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.956 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.956 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.957 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T23:54:36.956598) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.976 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.976 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.998 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.998 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.999 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
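disk.device.capacity is emitted once per block device, which is why each instance contributes two samples: a 1073741824-byte disk and a 509952-byte one. The first is exactly the 1 GB root disk of the m1.nano flavor; the log does not name the second device, though its size would fit a small config drive (an inference, not something the log states). The arithmetic:

    assert 1073741824 == 1024 ** 3   # 1 GiB, the flavor's disk: 1
    print(509952 / 1024)             # 498.0 KiB for the second device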
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:36.999 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.000 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.000 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.000 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.000 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.001 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.001 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.002 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.002 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.002 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.002 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.002 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.003 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T23:54:37.000439) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.003 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.003 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T23:54:37.002760) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.003 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.004 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
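network.outgoing.bytes is cumulative while network.outgoing.bytes.delta is the change since the previous poll: both instances report 2250 cumulative bytes above, but only b7d5e999... moved (630 bytes) this cycle. A sketch of deriving a delta from a cumulative counter, illustrative rather than ceilometer's implementation:

    previous = {}   # (instance_id, meter) -> last cumulative reading

    def to_delta(instance_id, meter, value):
        key = (instance_id, meter)
        d = value - previous.get(key, value)   # first observation yields 0
        previous[key] = value
        return max(d, 0)                       # guard against counter resets

    to_delta('b7d5e999', 'network.outgoing.bytes', 1620)          # prime with the prior poll
    print(to_delta('b7d5e999', 'network.outgoing.bytes', 2250))   # -> 630, as logged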
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.004 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.004 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.005 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.005 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.005 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.005 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/memory.usage volume: 42.47265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.006 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/memory.usage volume: 42.59375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.006 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T23:54:37.005385) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.006 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
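memory.usage is reported in MB, and the fractional volumes are consistent with a KiB figure divided by 1024 (libvirt reports guest memory statistics in KiB). Checking both samples against the 128 MB m1.nano allocation:

    assert 43492 / 1024 == 42.47265625   # b7d5e999...: ~42.5 MB of 128 MB in use
    assert 43616 / 1024 == 42.59375      # 0449208f...: ~42.6 MB of 128 MB in use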
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.007 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.007 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.007 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.007 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.008 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T23:54:37.007993) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.008 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.008 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.008 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.009 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.009 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.009 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.010 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.010 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.010 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.010 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.010 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.011 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.011 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T23:54:37.010939) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.011 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.012 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.012 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.013 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.013 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.013 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.013 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.014 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T23:54:37.013781) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:54:37 compute-0 nova_compute[189387]: 2025-11-26 23:54:37.015 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.072 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.bytes volume: 31050240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.073 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.129 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.bytes volume: 30812672 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.129 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.130 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.131 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.131 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.131 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.131 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.131 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.132 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.132 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.133 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.133 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.134 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.134 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.134 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.134 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.134 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.135 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.137 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.137 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T23:54:37.131875) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.138 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T23:54:37.134752) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.138 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.138 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.138 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.138 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.138 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.139 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.latency volume: 804445695 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.139 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.latency volume: 98873591 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.140 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.latency volume: 968376186 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.140 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.latency volume: 67351116 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.141 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.141 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.141 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T23:54:37.138762) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.142 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.142 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.142 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.142 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.143 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T23:54:37.142651) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.143 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.144 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.144 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.145 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.146 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.146 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.146 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.147 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.147 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.147 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.147 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.148 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.149 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.149 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T23:54:37.147564) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.150 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.151 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
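Taken together, the three per-device disk meters in this cycle read like libvirt's block-info triple: capacity (virtual size), allocation (bytes allocated on the host), and usage (physical bytes in use). That mapping is inferred from the values, not stated in the log. For the b7d5e999... root disk:

    capacity   = 1073741824   # 1 GiB virtual size
    allocation = 30744576     # ~29.3 MiB allocated
    usage      = 30081024     # ~28.7 MiB physically used
    print(allocation / 2**20, usage / 2**20)   # 29.32... 28.6875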
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.152 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.152 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.152 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.153 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.153 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.153 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T23:54:37.153251) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.153 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.requests volume: 1131 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.154 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.155 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.requests volume: 1112 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.156 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.157 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
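The lines above are one complete pollster pass: discovery of local instances, a coordination check against a (here unconfigured) hash ring, a heartbeat update, one sample per device, and a "Finished polling" marker. A minimal Python sketch of that control flow, with illustrative stand-in names rather than the real ceilometer internals:

    # Minimal sketch of the poll cycle traced above; class and method names
    # are illustrative stand-ins, not the actual ceilometer classes.
    import datetime

    class Pollster:
        def __init__(self, name):
            self.name = name

        def get_samples(self, resources):
            # The real pollster reads per-device stats from libvirt; a fixed
            # volume keeps the control flow visible without a hypervisor.
            return [(r, self.name, 0) for r in resources]

    class AgentManager:
        def __init__(self, pollsters, discover):
            self.pollsters = pollsters
            self.discover = discover          # the "local_instances" step
            self.heartbeats = {}

        def run_cycle(self):
            resources = self.discover()
            for p in self.pollsters:
                # No hash ring is configured, so coordination is skipped,
                # matching the "hashrings ... [None]" debug lines above.
                self.heartbeats[p.name] = datetime.datetime.now(datetime.timezone.utc)
                for sample in p.get_samples(resources):
                    print(sample)             # the real agent publishes these

    AgentManager([Pollster("disk.device.read.requests")],
                 lambda: ["instance-a", "instance-b"]).run_cycle()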
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.157 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.158 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.158 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.159 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.159 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.159 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.160 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T23:54:37.159323) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.160 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.161 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.161 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.162 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.162 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.162 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.162 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.162 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.162 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.163 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.163 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.163 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
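The power.state volume of 1 reported for both instances corresponds to libvirt's "running" domain state; the standard virDomainState numbering, for reference:

    # virDomainState values as defined by libvirt's public enum; power.state
    # samples carry these integers, so volume 1 means both guests are running.
    LIBVIRT_DOMAIN_STATES = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    assert LIBVIRT_DOMAIN_STATES[1] == "running"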
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.164 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.164 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.164 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T23:54:37.162854) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.164 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.165 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.165 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.165 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.latency volume: 2813639875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.165 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.165 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.latency volume: 4008872658 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.166 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.166 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.166 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.166 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.166 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.166 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.166 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.166 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.167 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.167 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
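network.incoming.bytes.delta is a derived meter: each poll diffs the cumulative interface counter against the value cached for that resource on the previous cycle. A stdlib sketch of that derivation (an assumption about the mechanism, not the actual ceilometer code):

    # Hypothetical delta derivation: cache the last cumulative reading per
    # resource and report the non-negative increase since the previous poll.
    _previous = {}

    def bytes_delta(resource_id, cumulative_bytes):
        last = _previous.get(resource_id)
        _previous[resource_id] = cumulative_bytes
        return 0 if last is None else max(cumulative_bytes - last, 0)

    print(bytes_delta("instance-a", 1000))   # first poll: no baseline -> 0
    print(bytes_delta("instance-a", 1630))   # next poll reports the increase, 630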
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.167 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.167 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.167 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.167 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.167 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.168 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.requests volume: 341 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.168 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.168 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.requests volume: 334 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.168 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.168 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.169 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.169 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T23:54:37.165124) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.169 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T23:54:37.166850) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.169 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T23:54:37.167951) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.169 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.169 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.169 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.170 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.170 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.170 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.170 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.170 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.170 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.170 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.170 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.170 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.170 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.170 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.170 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.170 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.170 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.170 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.170 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.171 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.171 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.171 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.171 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.171 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.171 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.171 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:54:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:54:37.171 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
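Two thread ids interleave throughout this cycle: 14 emits the discovery, polling, and sample lines, while 12 emits the "Updated heartbeat" lines, sometimes several entries later. That is consistent with a producer/consumer split in which the polling thread enqueues heartbeat timestamps and a status thread records them; a speculative stdlib sketch of that pattern (the real manager.py implementation may differ in detail):

    # Speculative producer/consumer heartbeat sketch.
    import datetime, queue, threading

    updates = queue.Queue()
    status = {}

    def recorder():                    # plays the role of thread 12
        while True:
            name, ts = updates.get()
            status[name] = ts          # the "_update_status" analogue
            updates.task_done()

    threading.Thread(target=recorder, daemon=True).start()
    updates.put(("disk.device.read.requests",
                 datetime.datetime.now(datetime.timezone.utc)))
    updates.join()                     # wait until the heartbeat is recorded
    print(status)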
Nov 26 23:54:41 compute-0 nova_compute[189387]: 2025-11-26 23:54:41.046 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:54:42 compute-0 nova_compute[189387]: 2025-11-26 23:54:42.018 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:54:45 compute-0 podman[257140]: 2025-11-26 23:54:45.79816518 +0000 UTC m=+0.089654200 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 26 23:54:46 compute-0 nova_compute[189387]: 2025-11-26 23:54:46.049 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:54:47 compute-0 nova_compute[189387]: 2025-11-26 23:54:47.022 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:54:49 compute-0 nova_compute[189387]: 2025-11-26 23:54:49.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:54:49 compute-0 nova_compute[189387]: 2025-11-26 23:54:49.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 26 23:54:49 compute-0 nova_compute[189387]: 2025-11-26 23:54:49.152 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
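_run_pending_deletes is one of several ComputeManager periodic tasks driven by oslo_service; each pass logs its start, does its work, and reports the count (zero here). A stdlib-only sketch of such a periodic-task registry (the real code uses oslo_service.periodic_task decorators, not this toy version):

    # Toy periodic-task registry; spacing and locking concerns are elided.
    import time

    TASKS = []

    def periodic(func):
        TASKS.append(func)
        return func

    @periodic
    def run_pending_deletes():
        deleted = []                          # nothing pending, as in the log
        print(f"There are {len(deleted)} instances to clean")

    def run_periodic_tasks(passes=1, spacing=0.0):
        for _ in range(passes):
            for task in TASKS:
                task()
            time.sleep(spacing)

    run_periodic_tasks()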
Nov 26 23:54:49 compute-0 podman[257162]: 2025-11-26 23:54:49.819000792 +0000 UTC m=+0.108225184 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:54:51 compute-0 nova_compute[189387]: 2025-11-26 23:54:51.052 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:54:52 compute-0 nova_compute[189387]: 2025-11-26 23:54:52.027 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:54:53 compute-0 nova_compute[189387]: 2025-11-26 23:54:53.155 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:54:53 compute-0 nova_compute[189387]: 2025-11-26 23:54:53.156 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 23:54:54 compute-0 nova_compute[189387]: 2025-11-26 23:54:54.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:54:54 compute-0 nova_compute[189387]: 2025-11-26 23:54:54.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:54:54 compute-0 nova_compute[189387]: 2025-11-26 23:54:54.522 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:54:54 compute-0 nova_compute[189387]: 2025-11-26 23:54:54.523 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:54:54 compute-0 nova_compute[189387]: 2025-11-26 23:54:54.523 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 23:54:55 compute-0 nova_compute[189387]: 2025-11-26 23:54:55.891 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Updating instance_info_cache with network_info: [{"id": "538c994f-bee1-4965-9065-a8ef17e40bea", "address": "fa:16:3e:47:75:6d", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap538c994f-be", "ovs_interfaceid": "538c994f-bee1-4965-9065-a8ef17e40bea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 23:54:55 compute-0 nova_compute[189387]: 2025-11-26 23:54:55.915 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 23:54:55 compute-0 nova_compute[189387]: 2025-11-26 23:54:55.916 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
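The cache heal above brackets the neutron refresh with a per-instance lock named "refresh_cache-<uuid>", so two concurrent refreshes of the same instance serialize while different instances proceed in parallel. A stdlib analogue of that named-lock pattern (nova itself uses oslo_concurrency.lockutils):

    # Named per-instance locks, sketched with stdlib threading primitives.
    import threading
    from collections import defaultdict

    _locks = defaultdict(threading.Lock)

    def refresh_network_cache(instance_uuid):
        with _locks[f"refresh_cache-{instance_uuid}"]:
            # fetch network_info from neutron and store it (elided)
            print("refreshed", instance_uuid)

    refresh_network_cache("b7d5e999-38ca-46e8-b572-cc9fad0fc2cc")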
Nov 26 23:54:55 compute-0 nova_compute[189387]: 2025-11-26 23:54:55.917 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:54:55 compute-0 nova_compute[189387]: 2025-11-26 23:54:55.951 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:54:55 compute-0 nova_compute[189387]: 2025-11-26 23:54:55.951 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:54:55 compute-0 nova_compute[189387]: 2025-11-26 23:54:55.952 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:54:55 compute-0 nova_compute[189387]: 2025-11-26 23:54:55.953 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:54:56 compute-0 nova_compute[189387]: 2025-11-26 23:54:56.040 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:54:56 compute-0 nova_compute[189387]: 2025-11-26 23:54:56.059 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:54:56 compute-0 nova_compute[189387]: 2025-11-26 23:54:56.106 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:54:56 compute-0 nova_compute[189387]: 2025-11-26 23:54:56.107 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:54:56 compute-0 nova_compute[189387]: 2025-11-26 23:54:56.180 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:54:56 compute-0 nova_compute[189387]: 2025-11-26 23:54:56.186 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:54:56 compute-0 nova_compute[189387]: 2025-11-26 23:54:56.246 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:54:56 compute-0 nova_compute[189387]: 2025-11-26 23:54:56.247 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:54:56 compute-0 nova_compute[189387]: 2025-11-26 23:54:56.309 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
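Each qemu-img probe above is wrapped in oslo_concurrency.prlimit, capping the child at a 1 GiB address space (--as=1073741824) and 30 s of CPU (--cpu=30) so a malformed disk image cannot exhaust the host. The exact command line from the log, reproduced as a Python helper (the instance path in the commented call is a placeholder):

    # Re-run the guarded image probe from the log and parse its JSON output.
    import json, subprocess

    def qemu_img_info(path):
        cmd = [
            "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
            "--as=1073741824", "--cpu=30", "--",
            "env", "LC_ALL=C", "LANG=C",
            "qemu-img", "info", path, "--force-share", "--output=json",
        ]
        out = subprocess.run(cmd, capture_output=True, check=True, text=True)
        return json.loads(out.stdout)

    # info = qemu_img_info("/var/lib/nova/instances/<uuid>/disk")
    # print(info["format"], info["virtual-size"])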
Nov 26 23:54:56 compute-0 nova_compute[189387]: 2025-11-26 23:54:56.633 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:54:56 compute-0 nova_compute[189387]: 2025-11-26 23:54:56.634 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4885MB free_disk=72.24779510498047GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 23:54:56 compute-0 nova_compute[189387]: 2025-11-26 23:54:56.635 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:54:56 compute-0 nova_compute[189387]: 2025-11-26 23:54:56.636 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:54:56 compute-0 podman[257197]: 2025-11-26 23:54:56.788708748 +0000 UTC m=+0.087140032 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 26 23:54:56 compute-0 nova_compute[189387]: 2025-11-26 23:54:56.828 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 0449208f-d12b-40cb-aa71-6f67f687cb6f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:54:56 compute-0 nova_compute[189387]: 2025-11-26 23:54:56.828 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance b7d5e999-38ca-46e8-b572-cc9fad0fc2cc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:54:56 compute-0 nova_compute[189387]: 2025-11-26 23:54:56.829 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:54:56 compute-0 nova_compute[189387]: 2025-11-26 23:54:56.829 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 23:54:56 compute-0 nova_compute[189387]: 2025-11-26 23:54:56.881 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing inventories for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 26 23:54:56 compute-0 nova_compute[189387]: 2025-11-26 23:54:56.963 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Updating ProviderTree inventory for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 26 23:54:56 compute-0 nova_compute[189387]: 2025-11-26 23:54:56.963 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Updating inventory in ProviderTree for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 26 23:54:56 compute-0 nova_compute[189387]: 2025-11-26 23:54:56.976 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing aggregate associations for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 26 23:54:56 compute-0 nova_compute[189387]: 2025-11-26 23:54:56.999 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing trait associations for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78, traits: COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE41,HW_CPU_X86_AMD_SVM,HW_CPU_X86_MMX,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,HW_CPU_X86_SSE,HW_CPU_X86_AESNI,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI2,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 26 23:54:57 compute-0 nova_compute[189387]: 2025-11-26 23:54:57.032 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:54:57 compute-0 nova_compute[189387]: 2025-11-26 23:54:57.059 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:54:57 compute-0 nova_compute[189387]: 2025-11-26 23:54:57.073 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 23:54:57 compute-0 nova_compute[189387]: 2025-11-26 23:54:57.075 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:54:57 compute-0 nova_compute[189387]: 2025-11-26 23:54:57.075 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.439s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
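The inventory the resource tracker reports to placement determines schedulable capacity: placement treats usable capacity as (total - reserved) * allocation_ratio per resource class. Worked through for the numbers logged above:

    # Capacity implied by the inventory for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2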
Nov 26 23:54:59 compute-0 nova_compute[189387]: 2025-11-26 23:54:59.283 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:54:59 compute-0 nova_compute[189387]: 2025-11-26 23:54:59.284 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:54:59 compute-0 podman[203621]: time="2025-11-26T23:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:54:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:54:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4816 "" "Go-http-client/1.1"
Nov 26 23:55:01 compute-0 nova_compute[189387]: 2025-11-26 23:55:01.058 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:55:01 compute-0 nova_compute[189387]: 2025-11-26 23:55:01.119 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:55:01 compute-0 nova_compute[189387]: 2025-11-26 23:55:01.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:55:01 compute-0 nova_compute[189387]: 2025-11-26 23:55:01.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:55:01 compute-0 openstack_network_exporter[205787]: ERROR   23:55:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:55:01 compute-0 openstack_network_exporter[205787]: ERROR   23:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:55:01 compute-0 openstack_network_exporter[205787]: ERROR   23:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:55:01 compute-0 openstack_network_exporter[205787]: ERROR   23:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:55:01 compute-0 openstack_network_exporter[205787]: ERROR   23:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
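These exporter errors mean it found no control socket files for the daemons it wanted to query; on a compute node that is expected for ovn-northd, which runs on the control plane rather than here. A quick stdlib check against the conventional socket locations (the paths are assumptions about this deployment, not values from the log):

    # Look for the per-daemon control sockets the exporter probes for.
    import glob

    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        matches = glob.glob(pattern)
        print(pattern, "->", matches or "no control socket files found")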
Nov 26 23:55:02 compute-0 nova_compute[189387]: 2025-11-26 23:55:02.035 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:55:04 compute-0 podman[257214]: 2025-11-26 23:55:04.836936351 +0000 UTC m=+0.117106190 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9, release=1214.1726694543, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release-0.7.12=, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=edpm, io.openshift.expose-services=, managed_by=edpm_ansible, vendor=Red Hat, Inc.)
Nov 26 23:55:04 compute-0 podman[257216]: 2025-11-26 23:55:04.844687078 +0000 UTC m=+0.104486225 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 23:55:04 compute-0 podman[257221]: 2025-11-26 23:55:04.859150463 +0000 UTC m=+0.103891858 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:55:04 compute-0 podman[257229]: 2025-11-26 23:55:04.860698675 +0000 UTC m=+0.093880193 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., release=1755695350, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Nov 26 23:55:04 compute-0 podman[257223]: 2025-11-26 23:55:04.866695385 +0000 UTC m=+0.107412623 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 26 23:55:04 compute-0 podman[257215]: 2025-11-26 23:55:04.898626815 +0000 UTC m=+0.169555938 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 26 23:55:06 compute-0 nova_compute[189387]: 2025-11-26 23:55:06.063 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:55:06 compute-0 nova_compute[189387]: 2025-11-26 23:55:06.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:55:07 compute-0 nova_compute[189387]: 2025-11-26 23:55:07.039 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:55:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:55:09.665 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:55:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:55:09.666 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:55:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:55:09.667 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:55:11 compute-0 nova_compute[189387]: 2025-11-26 23:55:11.064 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:55:12 compute-0 nova_compute[189387]: 2025-11-26 23:55:12.043 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:55:16 compute-0 nova_compute[189387]: 2025-11-26 23:55:16.066 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:55:16 compute-0 podman[257336]: 2025-11-26 23:55:16.865598437 +0000 UTC m=+0.146391350 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Nov 26 23:55:17 compute-0 nova_compute[189387]: 2025-11-26 23:55:17.047 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:55:18 compute-0 nova_compute[189387]: 2025-11-26 23:55:18.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:55:18 compute-0 nova_compute[189387]: 2025-11-26 23:55:18.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 26 23:55:20 compute-0 nova_compute[189387]: 2025-11-26 23:55:20.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:55:20 compute-0 podman[257357]: 2025-11-26 23:55:20.854200062 +0000 UTC m=+0.136437805 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 23:55:21 compute-0 nova_compute[189387]: 2025-11-26 23:55:21.068 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:55:22 compute-0 nova_compute[189387]: 2025-11-26 23:55:22.051 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:55:26 compute-0 nova_compute[189387]: 2025-11-26 23:55:26.072 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:55:27 compute-0 nova_compute[189387]: 2025-11-26 23:55:27.057 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:55:27 compute-0 podman[257381]: 2025-11-26 23:55:27.833204406 +0000 UTC m=+0.112429287 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm)
Nov 26 23:55:29 compute-0 podman[203621]: time="2025-11-26T23:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:55:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:55:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4819 "" "Go-http-client/1.1"
Nov 26 23:55:31 compute-0 nova_compute[189387]: 2025-11-26 23:55:31.077 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:55:31 compute-0 openstack_network_exporter[205787]: ERROR   23:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:55:31 compute-0 openstack_network_exporter[205787]: ERROR   23:55:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:55:31 compute-0 openstack_network_exporter[205787]: ERROR   23:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:55:31 compute-0 openstack_network_exporter[205787]: ERROR   23:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:55:31 compute-0 openstack_network_exporter[205787]: ERROR   23:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:55:32 compute-0 nova_compute[189387]: 2025-11-26 23:55:32.061 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:55:35 compute-0 podman[257399]: 2025-11-26 23:55:35.849051534 +0000 UTC m=+0.098446404 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, vcs-type=git, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release-0.7.12=, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, release=1214.1726694543, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 26 23:55:35 compute-0 podman[257402]: 2025-11-26 23:55:35.874725058 +0000 UTC m=+0.112927260 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 26 23:55:35 compute-0 podman[257401]: 2025-11-26 23:55:35.877716947 +0000 UTC m=+0.122875304 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 23:55:35 compute-0 podman[257403]: 2025-11-26 23:55:35.883338317 +0000 UTC m=+0.111995475 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 26 23:55:35 compute-0 podman[257409]: 2025-11-26 23:55:35.892568614 +0000 UTC m=+0.119933267 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, version=9.6, distribution-scope=public, config_id=edpm, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, managed_by=edpm_ansible, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 26 23:55:35 compute-0 podman[257400]: 2025-11-26 23:55:35.918619287 +0000 UTC m=+0.161258287 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 26 23:55:36 compute-0 nova_compute[189387]: 2025-11-26 23:55:36.079 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:55:37 compute-0 nova_compute[189387]: 2025-11-26 23:55:37.064 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:55:41 compute-0 nova_compute[189387]: 2025-11-26 23:55:41.081 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:55:42 compute-0 nova_compute[189387]: 2025-11-26 23:55:42.068 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:55:46 compute-0 nova_compute[189387]: 2025-11-26 23:55:46.085 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:55:47 compute-0 nova_compute[189387]: 2025-11-26 23:55:47.072 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:55:47 compute-0 podman[257515]: 2025-11-26 23:55:47.839517036 +0000 UTC m=+0.129280825 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 26 23:55:51 compute-0 nova_compute[189387]: 2025-11-26 23:55:51.088 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:55:51 compute-0 podman[257535]: 2025-11-26 23:55:51.842949638 +0000 UTC m=+0.117629555 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 23:55:52 compute-0 nova_compute[189387]: 2025-11-26 23:55:52.077 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:55:53 compute-0 nova_compute[189387]: 2025-11-26 23:55:53.139 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:55:53 compute-0 nova_compute[189387]: 2025-11-26 23:55:53.139 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 23:55:55 compute-0 nova_compute[189387]: 2025-11-26 23:55:55.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:55:55 compute-0 nova_compute[189387]: 2025-11-26 23:55:55.157 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:55:55 compute-0 nova_compute[189387]: 2025-11-26 23:55:55.157 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:55:55 compute-0 nova_compute[189387]: 2025-11-26 23:55:55.158 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:55:55 compute-0 nova_compute[189387]: 2025-11-26 23:55:55.158 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:55:55 compute-0 nova_compute[189387]: 2025-11-26 23:55:55.268 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:55:55 compute-0 nova_compute[189387]: 2025-11-26 23:55:55.361 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:55:55 compute-0 nova_compute[189387]: 2025-11-26 23:55:55.362 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:55:55 compute-0 nova_compute[189387]: 2025-11-26 23:55:55.457 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:55:55 compute-0 nova_compute[189387]: 2025-11-26 23:55:55.464 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:55:55 compute-0 nova_compute[189387]: 2025-11-26 23:55:55.564 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:55:55 compute-0 nova_compute[189387]: 2025-11-26 23:55:55.566 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:55:55 compute-0 nova_compute[189387]: 2025-11-26 23:55:55.640 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:55:56 compute-0 nova_compute[189387]: 2025-11-26 23:55:56.007 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:55:56 compute-0 nova_compute[189387]: 2025-11-26 23:55:56.008 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4889MB free_disk=72.24784088134766GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 23:55:56 compute-0 nova_compute[189387]: 2025-11-26 23:55:56.009 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:55:56 compute-0 nova_compute[189387]: 2025-11-26 23:55:56.009 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:55:56 compute-0 nova_compute[189387]: 2025-11-26 23:55:56.090 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:55:56 compute-0 nova_compute[189387]: 2025-11-26 23:55:56.103 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 0449208f-d12b-40cb-aa71-6f67f687cb6f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:55:56 compute-0 nova_compute[189387]: 2025-11-26 23:55:56.104 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance b7d5e999-38ca-46e8-b572-cc9fad0fc2cc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:55:56 compute-0 nova_compute[189387]: 2025-11-26 23:55:56.105 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:55:56 compute-0 nova_compute[189387]: 2025-11-26 23:55:56.105 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 23:55:56 compute-0 nova_compute[189387]: 2025-11-26 23:55:56.187 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:55:56 compute-0 nova_compute[189387]: 2025-11-26 23:55:56.203 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 23:55:56 compute-0 nova_compute[189387]: 2025-11-26 23:55:56.207 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:55:56 compute-0 nova_compute[189387]: 2025-11-26 23:55:56.207 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:55:57 compute-0 nova_compute[189387]: 2025-11-26 23:55:57.081 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:55:57 compute-0 nova_compute[189387]: 2025-11-26 23:55:57.210 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:55:57 compute-0 nova_compute[189387]: 2025-11-26 23:55:57.210 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:55:57 compute-0 nova_compute[189387]: 2025-11-26 23:55:57.210 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 23:55:57 compute-0 nova_compute[189387]: 2025-11-26 23:55:57.562 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-0449208f-d12b-40cb-aa71-6f67f687cb6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:55:57 compute-0 nova_compute[189387]: 2025-11-26 23:55:57.564 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-0449208f-d12b-40cb-aa71-6f67f687cb6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:55:57 compute-0 nova_compute[189387]: 2025-11-26 23:55:57.565 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 23:55:57 compute-0 nova_compute[189387]: 2025-11-26 23:55:57.566 189391 DEBUG nova.objects.instance [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0449208f-d12b-40cb-aa71-6f67f687cb6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 26 23:55:58 compute-0 podman[257570]: 2025-11-26 23:55:58.816959508 +0000 UTC m=+0.110490764 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4)
Nov 26 23:55:58 compute-0 nova_compute[189387]: 2025-11-26 23:55:58.914 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Updating instance_info_cache with network_info: [{"id": "a6675240-60ea-47db-9ef6-66080adb5743", "address": "fa:16:3e:d6:2e:64", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6675240-60", "ovs_interfaceid": "a6675240-60ea-47db-9ef6-66080adb5743", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 23:55:58 compute-0 nova_compute[189387]: 2025-11-26 23:55:58.933 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-0449208f-d12b-40cb-aa71-6f67f687cb6f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 23:55:58 compute-0 nova_compute[189387]: 2025-11-26 23:55:58.934 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
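
The network_info payload logged during the refresh is plain JSON, so pulling out the useful fields (device name, MAC, fixed IPs) is straightforward. A hypothetical sketch over a trimmed copy of the entry above:

    import json

    # Trimmed copy of the cache entry logged above (illustrative).
    cache_entry = json.loads("""
    {"id": "a6675240-60ea-47db-9ef6-66080adb5743",
     "address": "fa:16:3e:d6:2e:64",
     "devname": "tapa6675240-60",
     "network": {"subnets": [{"cidr": "10.100.0.0/16",
                              "ips": [{"address": "10.100.2.181", "type": "fixed"}]}]}}
    """)

    fixed_ips = [ip["address"]
                 for subnet in cache_entry["network"]["subnets"]
                 for ip in subnet["ips"] if ip["type"] == "fixed"]
    print(cache_entry["devname"], cache_entry["address"], fixed_ips)
    # tapa6675240-60 fa:16:3e:d6:2e:64 ['10.100.2.181']
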
Nov 26 23:55:59 compute-0 nova_compute[189387]: 2025-11-26 23:55:59.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:55:59 compute-0 nova_compute[189387]: 2025-11-26 23:55:59.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:55:59 compute-0 podman[203621]: time="2025-11-26T23:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:55:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:55:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
Nov 26 23:56:01 compute-0 nova_compute[189387]: 2025-11-26 23:56:01.094 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:56:01 compute-0 nova_compute[189387]: 2025-11-26 23:56:01.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:56:01 compute-0 nova_compute[189387]: 2025-11-26 23:56:01.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:56:01 compute-0 openstack_network_exporter[205787]: ERROR   23:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:56:01 compute-0 openstack_network_exporter[205787]: ERROR   23:56:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:56:01 compute-0 openstack_network_exporter[205787]: ERROR   23:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:56:01 compute-0 openstack_network_exporter[205787]: ERROR   23:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:56:01 compute-0 openstack_network_exporter[205787]: ERROR   23:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:56:02 compute-0 nova_compute[189387]: 2025-11-26 23:56:02.084 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:56:02 compute-0 nova_compute[189387]: 2025-11-26 23:56:02.120 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:56:06 compute-0 nova_compute[189387]: 2025-11-26 23:56:06.097 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:56:06 compute-0 podman[257598]: 2025-11-26 23:56:06.817157677 +0000 UTC m=+0.081701297 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 26 23:56:06 compute-0 podman[257604]: 2025-11-26 23:56:06.830647097 +0000 UTC m=+0.084919354 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 23:56:06 compute-0 podman[257590]: 2025-11-26 23:56:06.840777696 +0000 UTC m=+0.123866970 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, architecture=x86_64, maintainer=Red Hat, Inc., io.openshift.expose-services=, config_id=edpm, io.buildah.version=1.29.0, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, managed_by=edpm_ansible, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, release-0.7.12=, vcs-type=git)
Nov 26 23:56:06 compute-0 podman[257591]: 2025-11-26 23:56:06.867614211 +0000 UTC m=+0.143236047 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 23:56:06 compute-0 podman[257592]: 2025-11-26 23:56:06.869206185 +0000 UTC m=+0.136310274 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 23:56:06 compute-0 podman[257614]: 2025-11-26 23:56:06.87202252 +0000 UTC m=+0.101344442 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., release=1755695350, version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, config_id=edpm)
Nov 26 23:56:07 compute-0 nova_compute[189387]: 2025-11-26 23:56:07.087 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:56:07 compute-0 nova_compute[189387]: 2025-11-26 23:56:07.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
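
The steady stream of "Running periodic task ComputeManager._..." lines comes from oslo.service walking every method decorated as a periodic task. A minimal sketch of the pattern, assuming oslo.service and oslo.config are installed (the manager class and task below are illustrative, not nova's ComputeManager):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _check_instance_build_time(self, context):
            # Runs at most once per 'spacing' seconds each time the
            # service pumps run_periodic_tasks().
            pass

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)
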
Nov 26 23:56:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:56:09.666 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:56:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:56:09.666 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:56:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:56:09.667 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:56:11 compute-0 nova_compute[189387]: 2025-11-26 23:56:11.101 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:56:12 compute-0 nova_compute[189387]: 2025-11-26 23:56:12.094 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:56:15 compute-0 nova_compute[189387]: 2025-11-26 23:56:15.120 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:56:16 compute-0 nova_compute[189387]: 2025-11-26 23:56:16.103 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:56:17 compute-0 nova_compute[189387]: 2025-11-26 23:56:17.097 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:56:18 compute-0 podman[257706]: 2025-11-26 23:56:18.796136497 +0000 UTC m=+0.097556730 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 26 23:56:21 compute-0 nova_compute[189387]: 2025-11-26 23:56:21.106 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:56:22 compute-0 nova_compute[189387]: 2025-11-26 23:56:22.100 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:56:22 compute-0 podman[257726]: 2025-11-26 23:56:22.804863006 +0000 UTC m=+0.091322454 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:56:26 compute-0 nova_compute[189387]: 2025-11-26 23:56:26.110 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:56:27 compute-0 nova_compute[189387]: 2025-11-26 23:56:27.104 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:56:29 compute-0 podman[203621]: time="2025-11-26T23:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:56:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:56:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4815 "" "Go-http-client/1.1"
Nov 26 23:56:29 compute-0 podman[257751]: 2025-11-26 23:56:29.811466823 +0000 UTC m=+0.097040106 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 26 23:56:31 compute-0 nova_compute[189387]: 2025-11-26 23:56:31.112 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:56:31 compute-0 openstack_network_exporter[205787]: ERROR   23:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:56:31 compute-0 openstack_network_exporter[205787]: ERROR   23:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:56:31 compute-0 openstack_network_exporter[205787]: ERROR   23:56:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:56:31 compute-0 openstack_network_exporter[205787]: ERROR   23:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:56:31 compute-0 openstack_network_exporter[205787]: ERROR   23:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:56:32 compute-0 nova_compute[189387]: 2025-11-26 23:56:32.107 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:56:36 compute-0 nova_compute[189387]: 2025-11-26 23:56:36.115 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.853 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] exceeds the number of worker threads available to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.854 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
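
The message above simply means this polling source has more pollsters than worker threads, so pollsters queue and a cycle can outlast the poll interval. A minimal sketch of the effect with plain concurrent.futures (not ceilometer code); with max_workers=1, as logged, the pollsters run strictly one after another:

    from concurrent.futures import ThreadPoolExecutor

    pollsters = ['disk.ephemeral.size', 'network.incoming.packets',
                 'disk.root.size', 'network.incoming.packets.drop', 'cpu']

    def poll(name):
        # Stand-in for polling one meter against libvirt.
        return f'polled {name}'

    with ThreadPoolExecutor(max_workers=1) as executor:
        for result in executor.map(poll, pollsters):
            print(result)
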
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.854 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.855 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.861 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.861 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.861 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.861 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.861 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.862 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.862 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.862 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.862 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.863 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.863 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.863 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.865 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b7d5e999-38ca-46e8-b572-cc9fad0fc2cc', 'name': 'te-7486994-asg-gqdvh3lloqbk-w3pew7r5aglv-t7fkcg4jtkgf', 'flavor': {'id': 'a4234b2d-ed51-4e17-ad57-a8fb6154451b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'aa1a3d84-3b07-42eb-bb8c-755851616ed6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000f', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '717a3950b66241768222cb5d4ba3291e', 'user_id': '5715267a6ec9422aa9b3ef4a2956aa77', 'hostId': '27d3802b1abe41bf2d1abd490eb0aa08acfb598924ded34a7e1a15fc', 'status': 'active', 'metadata': {'metering.server_group': '92e43243-aca7-437e-ae08-bcb42a48e489'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.870 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '0449208f-d12b-40cb-aa71-6f67f687cb6f', 'name': 'te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di', 'flavor': {'id': 'a4234b2d-ed51-4e17-ad57-a8fb6154451b', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'aa1a3d84-3b07-42eb-bb8c-755851616ed6'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '717a3950b66241768222cb5d4ba3291e', 'user_id': '5715267a6ec9422aa9b3ef4a2956aa77', 'hostId': '27d3802b1abe41bf2d1abd490eb0aa08acfb598924ded34a7e1a15fc', 'status': 'active', 'metadata': {'metering.server_group': '92e43243-aca7-437e-ae08-bcb42a48e489'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
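
Both discovered instances carry the same metering.server_group metadata, which lets samples from the members of one autoscaling group be aggregated downstream. A hypothetical sketch grouping the instance records logged above by that key:

    # Trimmed copies of the discovery records above (illustrative).
    instances = [
        {'id': 'b7d5e999-38ca-46e8-b572-cc9fad0fc2cc',
         'metadata': {'metering.server_group': '92e43243-aca7-437e-ae08-bcb42a48e489'}},
        {'id': '0449208f-d12b-40cb-aa71-6f67f687cb6f',
         'metadata': {'metering.server_group': '92e43243-aca7-437e-ae08-bcb42a48e489'}},
    ]

    groups = {}
    for inst in instances:
        group = inst['metadata'].get('metering.server_group')
        groups.setdefault(group, []).append(inst['id'])
    print(groups)   # one group holding both instance ids
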
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.871 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.871 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.871 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.871 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.873 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.872 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-26T23:56:36.871768) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.873 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.873 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.873 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.874 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.874 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.875 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-26T23:56:36.874323) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.880 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.886 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.887 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
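
network.incoming.packets is a cumulative counter (volumes 25 and 28 above), so a per-second rate comes from differencing two successive polls of the same instance. A minimal sketch with one hypothetical earlier sample:

    # (timestamp, volume) pairs; the first sample is hypothetical, the
    # second mirrors the volume logged above for instance 0449208f-....
    t0, v0 = 1764201366.0, 20
    t1, v1 = 1764201396.9, 28

    rate = (v1 - v0) / (t1 - t0)
    print(f'{rate:.3f} packets/s')   # ~0.259 packets/s
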
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.887 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.887 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.887 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.888 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.888 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.889 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-26T23:56:36.888454) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.889 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.890 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.890 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.890 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.890 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.891 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.891 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-26T23:56:36.890982) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.891 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.892 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.893 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.893 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.893 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.894 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.894 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.894 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.894 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-26T23:56:36.894385) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.925 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/cpu volume: 334470000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.959 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/cpu volume: 338250000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.959 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
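
The cpu meter is a cumulative counter of guest CPU time in nanoseconds, so the 334470000000 above is roughly 334.5 seconds consumed since the guest started. A utilization percentage needs two consecutive samples; the helper below is that standard arithmetic rather than ceilometer code, and the second sample is invented for the example.

    def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus):
        # Fraction of available CPU time used between two cumulative samples.
        used_s = (curr_ns - prev_ns) / 1e9
        return 100.0 * used_s / (interval_s * vcpus)

    # Hypothetical follow-up sample taken 300 s later on a 1-vCPU guest:
    print(cpu_util_percent(334470000000, 334770000000, 300, 1))  # -> 0.1 (%)
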
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.960 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.960 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.960 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.960 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.961 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.961 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.961 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-26T23:56:36.960990) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.962 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.962 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.963 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.963 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.963 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.963 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.964 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-26T23:56:36.963824) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.984 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:36.984 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.003 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.004 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.005 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
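
disk.device.capacity is emitted once per block device, which is why each instance produces two volumes: 1073741824 B is exactly 1 GiB (the root disk), and 509952 B is about 498 KiB, a size consistent with a config drive, though the device names are not visible at this log level. A small converter for reading the raw byte counts:

    def human(n):
        # Render a raw byte count the way the capacities above are sized.
        for unit in ("B", "KiB", "MiB", "GiB", "TiB"):
            if n < 1024 or unit == "TiB":
                return f"{n:.1f} {unit}"
            n /= 1024

    print(human(1073741824))  # 1.0 GiB
    print(human(509952))      # 498.0 KiB
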
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.005 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.005 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.005 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.006 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.006 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.006 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.007 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-26T23:56:37.006341) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.007 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.008 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.008 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.009 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.009 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.009 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.009 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.010 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-26T23:56:37.009724) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.010 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.010 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.011 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
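
The .delta meters report the change in the matching cumulative counter since the previous poll, so the zeros above just mean no traffic crossed these interfaces during the interval. A minimal delta tracker, assuming a per-resource cache along the lines of what the pollster keeps internally:

    class DeltaMeter:
        def __init__(self):
            self._last = {}

        def sample(self, resource_id, cumulative):
            # First observation yields 0: there is no baseline to diff against.
            prev = self._last.get(resource_id, cumulative)
            self._last[resource_id] = cumulative
            return cumulative - prev

    m = DeltaMeter()
    print(m.sample("b7d5e999", 2250))  # first poll  -> 0
    print(m.sample("b7d5e999", 2400))  # second poll -> 150
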
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.011 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.012 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.012 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.012 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.012 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.013 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/memory.usage volume: 42.49609375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.013 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-26T23:56:37.012701) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.013 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/memory.usage volume: 42.59375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.014 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
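
memory.usage is reported in megabytes, and the fractional volumes (42.49609375 and 42.59375) are what a division by 1024 produces, consistent with libvirt exposing guest memory statistics in KiB. The reverse calculation:

    # The MB volumes above, converted back to the KiB counts libvirt reports.
    for mb in (42.49609375, 42.59375):
        print(mb * 1024)  # -> 43516.0 and 43616.0 KiB in use
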
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.014 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.014 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.014 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.015 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.015 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.015 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.016 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.bytes volume: 2150 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.017 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.017 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
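
network.outgoing.bytes.rate is the one meter skipped in this section: its discovery step returned no resources, so the manager short-circuits before polling. When a rate meter does run, the value is conventionally the counter delta divided by the polling interval; the helper below shows only that arithmetic and is not ceilometer's implementation.

    def bytes_rate(prev_bytes, curr_bytes, interval_s):
        # Per-second rate from two cumulative readings; clamp negative deltas
        # (e.g. after a counter reset) to zero.
        return max(curr_bytes - prev_bytes, 0) / interval_s

    print(bytes_rate(2250, 2550, 300))  # -> 1.0 B/s
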
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.018 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.018 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.018 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.019 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.019 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.019 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.020 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.021 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.022 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-26T23:56:37.015412) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-26T23:56:37.019508) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.022 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.022 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.022 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.022 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.023 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-26T23:56:37.022790) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.072 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.bytes volume: 31050240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.072 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 nova_compute[189387]: 2025-11-26 23:56:37.111 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.118 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.bytes volume: 30812672 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.120 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.121 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
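
The number after the timestamp is the process ID (oslo.log's %(process)d): samples come from worker 14 while the "Updated heartbeat" confirmations are written by worker 12, which is why those confirmations can trail the pollster that triggered them by several lines, as they did just above for network.incoming.bytes and network.outgoing.packets. Two threads and a queue reproduce the interleaving; this illustrates the pattern only and is not ceilometer's actual inter-process plumbing.

    import queue
    import threading

    def status_writer(q):
        # Separate worker that persists heartbeats, like the worker-12 lines.
        while (meter := q.get()) is not None:
            print(f"Updated heartbeat for {meter}")

    q = queue.Queue()
    writer = threading.Thread(target=status_writer, args=(q,))
    writer.start()
    for meter in ("disk.device.read.bytes", "network.outgoing.packets"):
        print(f"Polling pollster {meter}")
        q.put(meter)   # hand off; the confirmation may print a few lines later
    q.put(None)        # sentinel: stop the writer
    writer.join()
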
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.121 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.121 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.122 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.122 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.122 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.123 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.123 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-26T23:56:37.122624) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.123 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.124 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.124 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.124 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.125 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.125 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.125 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.125 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.126 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-26T23:56:37.125317) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.126 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.126 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.126 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.127 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.127 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-26T23:56:37.127230) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.127 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.latency volume: 804445695 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.128 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.latency volume: 98873591 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.128 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.latency volume: 968376186 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.128 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.latency volume: 67351116 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.129 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
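
disk.device.read.latency is another cumulative counter, nanoseconds spent servicing reads. Dividing by the matching request counter gives an average per-read latency; the pairing below assumes the first-listed device here is the same one that reports 1131 reads in disk.device.read.requests a moment later in this cycle.

    total_read_ns = 804445695  # cumulative ns spent on reads, first device
    read_requests = 1131       # cumulative read requests, same device (assumed)
    print(total_read_ns / read_requests / 1e6, "ms average per read")  # ~0.71
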
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.129 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.129 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.129 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.129 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.129 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.129 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.usage volume: 30081024 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.130 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.130 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.130 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.131 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.131 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.131 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.132 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.132 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.132 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.132 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.133 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-26T23:56:37.129710) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.133 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-26T23:56:37.132525) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.133 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.133 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.134 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.134 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
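
For a thin-provisioned image the three disk meters diverge: capacity is the virtual size (1 GiB above), allocation is what the host has actually assigned (about 29 MiB), and usage sits close to allocation. The ratio makes the thin provisioning visible:

    capacity = 1073741824   # disk.device.capacity, first device
    allocation = 30744576   # disk.device.allocation, same device
    print(f"{100 * allocation / capacity:.1f}% of the virtual size is allocated")
    # -> 2.9% of the virtual size is allocated
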
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.135 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.135 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.135 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.135 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.135 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.136 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-26T23:56:37.135843) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.136 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.requests volume: 1131 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.136 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.137 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.requests volume: 1112 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.137 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.138 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.138 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.138 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.138 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.138 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.139 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.139 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.bytes volume: 73191424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.139 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.140 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-26T23:56:37.139042) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.140 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.140 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.141 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.141 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.141 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.142 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.142 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.142 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.142 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.143 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.143 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.144 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-26T23:56:37.142460) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
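
power.state reports the raw libvirt domain state, so volume 1 for both guests means running. The mapping below follows libvirt's virDomainState enum; the meter stores just the number.

    # virDomainState values as defined by libvirt.
    LIBVIRT_DOMAIN_STATES = {
        0: "no state", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shut off", 6: "crashed", 7: "pmsuspended",
    }
    print(LIBVIRT_DOMAIN_STATES[1])  # running
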
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.144 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.144 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.144 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.144 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.144 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.144 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.latency volume: 2813639875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.145 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.145 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-26T23:56:37.144772) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.146 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.latency volume: 4008872658 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.146 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.146 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.147 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.147 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.147 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.147 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.147 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.147 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.147 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-26T23:56:37.147564) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.148 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.148 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.148 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.149 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.149 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.149 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.149 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.149 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-26T23:56:37.149415) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.149 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.requests volume: 341 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.150 14 DEBUG ceilometer.compute.pollsters [-] b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.150 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.requests volume: 334 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.150 14 DEBUG ceilometer.compute.pollsters [-] 0449208f-d12b-40cb-aa71-6f67f687cb6f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.151 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.151 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.151 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.152 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.152 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.152 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.152 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.152 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.153 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.153 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.153 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.153 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.153 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.153 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.153 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.153 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.154 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.154 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.154 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.154 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.154 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.154 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.154 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.155 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.155 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.155 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.155 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.155 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:56:37 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:56:37.155 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
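The ceilometer lines above trace one complete polling cycle: for each pollster the agent runs discovery over the local instances, checks whether the pollster belongs to a source that needs hashring coordination, updates its heartbeat, turns the libvirt stats into samples, and skips any pollster whose discovery returns nothing new. A minimal, runnable sketch of that control flow, using stub names (StubPollster, run_polling_task) rather than the real ceilometer classes:

    # Sketch of the polling-cycle pattern in the log above; the stub
    # pollster and helper names are illustrative, not ceilometer's code.
    class StubPollster:
        name = "power.state"
        def get_samples(self, resources):
            for res in resources:
                yield {"resource_id": res, "meter": self.name, "volume": 1}

    def run_polling_task(pollsters, discover, publish):
        for pollster in pollsters:
            resources = discover("local_instances")
            if not resources:
                continue  # "Skip pollster <name>, no new resources found"
            # coordination check elided: no hashring is configured above
            for sample in pollster.get_samples(resources):
                publish(sample)  # "<resource>/<meter> volume: <n>"
            # "Finished polling pollster <name>" would be logged here

    run_polling_task([StubPollster()],
                     lambda src: ["b7d5e999-38ca-46e8-b572-cc9fad0fc2cc"],
                     print)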
Nov 26 23:56:37 compute-0 podman[257772]: 2025-11-26 23:56:37.849031449 +0000 UTC m=+0.112980051 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 26 23:56:37 compute-0 podman[257779]: 2025-11-26 23:56:37.850782225 +0000 UTC m=+0.096475961 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 23:56:37 compute-0 podman[257784]: 2025-11-26 23:56:37.86409754 +0000 UTC m=+0.116112564 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, architecture=x86_64, container_name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc.)
Nov 26 23:56:37 compute-0 podman[257773]: 2025-11-26 23:56:37.870229964 +0000 UTC m=+0.127032266 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 26 23:56:37 compute-0 podman[257770]: 2025-11-26 23:56:37.879818949 +0000 UTC m=+0.160513447 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, vcs-type=git, build-date=2024-09-18T21:23:30, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., name=ubi9, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.4)
Nov 26 23:56:37 compute-0 podman[257771]: 2025-11-26 23:56:37.88774108 +0000 UTC m=+0.159589423 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
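Each podman health_status event above comes from a timer-driven healthcheck run that executes the healthcheck.test command recorded in config_data inside the container. The resulting state can be read back from the container's inspect data; a small sketch, assuming podman 4.x where inspect exposes .State.Health.Status (container name copied from the log):

    # Sketch: read the health state behind the health_status events above.
    # Assumes a podman 4.x inspect layout (.State.Health.Status).
    import subprocess

    def health_status(name):
        out = subprocess.run(
            ["podman", "inspect", "--format",
             "{{.State.Health.Status}}", name],
            capture_output=True, text=True, check=True)
        return out.stdout.strip()

    print(health_status("node_exporter"))  # "healthy" per the log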
Nov 26 23:56:41 compute-0 nova_compute[189387]: 2025-11-26 23:56:41.117 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:56:42 compute-0 nova_compute[189387]: 2025-11-26 23:56:42.114 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:56:46 compute-0 nova_compute[189387]: 2025-11-26 23:56:46.120 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:56:47 compute-0 nova_compute[189387]: 2025-11-26 23:56:47.118 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:56:49 compute-0 podman[257890]: 2025-11-26 23:56:49.821647416 +0000 UTC m=+0.111641926 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 26 23:56:51 compute-0 nova_compute[189387]: 2025-11-26 23:56:51.123 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:56:52 compute-0 nova_compute[189387]: 2025-11-26 23:56:52.122 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:56:53 compute-0 podman[257908]: 2025-11-26 23:56:53.795382192 +0000 UTC m=+0.076928520 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 26 23:56:55 compute-0 nova_compute[189387]: 2025-11-26 23:56:55.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:56:55 compute-0 nova_compute[189387]: 2025-11-26 23:56:55.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 26 23:56:55 compute-0 nova_compute[189387]: 2025-11-26 23:56:55.126 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:56:55 compute-0 nova_compute[189387]: 2025-11-26 23:56:55.177 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:56:55 compute-0 nova_compute[189387]: 2025-11-26 23:56:55.178 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:56:55 compute-0 nova_compute[189387]: 2025-11-26 23:56:55.179 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:56:55 compute-0 nova_compute[189387]: 2025-11-26 23:56:55.180 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:56:55 compute-0 nova_compute[189387]: 2025-11-26 23:56:55.280 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:56:55 compute-0 nova_compute[189387]: 2025-11-26 23:56:55.376 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:56:55 compute-0 nova_compute[189387]: 2025-11-26 23:56:55.378 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:56:55 compute-0 nova_compute[189387]: 2025-11-26 23:56:55.480 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:56:55 compute-0 nova_compute[189387]: 2025-11-26 23:56:55.492 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:56:55 compute-0 nova_compute[189387]: 2025-11-26 23:56:55.587 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 26 23:56:55 compute-0 nova_compute[189387]: 2025-11-26 23:56:55.590 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 26 23:56:55 compute-0 nova_compute[189387]: 2025-11-26 23:56:55.664 189391 DEBUG oslo_concurrency.processutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
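The four commands above are the resource tracker's disk audit: one qemu-img info per instance disk, wrapped in oslo_concurrency.prlimit so the helper cannot exceed 1 GiB of address space (--as=1073741824) or 30 CPU seconds (--cpu=30), and run with --force-share so the image lock held by the running QEMU is left alone. A sketch reproducing the exact command from the log and parsing its JSON output (assumes qemu-img and oslo.concurrency are installed on the host):

    # Sketch: rerun the disk-audit command from the log and parse it.
    # Limits and flags are copied verbatim from the lines above.
    import json
    import subprocess

    def qemu_img_info(path):
        cmd = [
            "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
            "--as=1073741824",   # cap address space at 1 GiB
            "--cpu=30",          # cap CPU time at 30 seconds
            "--", "env", "LC_ALL=C", "LANG=C",
            "qemu-img", "info", path, "--force-share", "--output=json",
        ]
        out = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return json.loads(out.stdout)

    info = qemu_img_info(
        "/var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc/disk")
    print(info.get("virtual-size"), info.get("actual-size"))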
Nov 26 23:56:56 compute-0 nova_compute[189387]: 2025-11-26 23:56:56.126 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:56:56 compute-0 nova_compute[189387]: 2025-11-26 23:56:56.213 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:56:56 compute-0 nova_compute[189387]: 2025-11-26 23:56:56.214 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4919MB free_disk=72.24784088134766GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 23:56:56 compute-0 nova_compute[189387]: 2025-11-26 23:56:56.215 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:56:56 compute-0 nova_compute[189387]: 2025-11-26 23:56:56.215 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:56:56 compute-0 nova_compute[189387]: 2025-11-26 23:56:56.290 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance 0449208f-d12b-40cb-aa71-6f67f687cb6f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:56:56 compute-0 nova_compute[189387]: 2025-11-26 23:56:56.291 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Instance b7d5e999-38ca-46e8-b572-cc9fad0fc2cc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 26 23:56:56 compute-0 nova_compute[189387]: 2025-11-26 23:56:56.291 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:56:56 compute-0 nova_compute[189387]: 2025-11-26 23:56:56.292 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 23:56:56 compute-0 nova_compute[189387]: 2025-11-26 23:56:56.348 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:56:56 compute-0 nova_compute[189387]: 2025-11-26 23:56:56.362 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
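The inventory dict above is how placement sizes this provider: for each resource class the schedulable capacity is (total - reserved) * allocation_ratio. A quick check of the reported numbers:

    # Worked check of the placement inventory logged above.
    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        cap = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2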
Nov 26 23:56:56 compute-0 nova_compute[189387]: 2025-11-26 23:56:56.365 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:56:56 compute-0 nova_compute[189387]: 2025-11-26 23:56:56.366 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.151s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
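The Acquiring/acquired/released triplets around the audit come from oslo_concurrency's locking decorator, which logs how long the caller waited for the lock and how long it held it. A minimal pure-Python sketch of that tracing pattern (illustrative, not the oslo implementation):

    # Sketch of the lock tracing seen in the lockutils lines above.
    import threading
    import time

    _locks = {}

    def synchronized(name):
        lock = _locks.setdefault(name, threading.Lock())
        def wrap(fn):
            def inner(*args, **kwargs):
                t0 = time.monotonic()
                with lock:
                    waited = time.monotonic() - t0
                    print(f'Lock "{name}" acquired by "{fn.__name__}" '
                          f':: waited {waited:.3f}s')
                    t1 = time.monotonic()
                    try:
                        return fn(*args, **kwargs)
                    finally:
                        held = time.monotonic() - t1
                        print(f'Lock "{name}" "released" by "{fn.__name__}" '
                              f':: held {held:.3f}s')
            return inner
        return wrap

    @synchronized("compute_resources")
    def _update_available_resource():
        time.sleep(0.1)  # stand-in for the resource-tracker audit

    _update_available_resource()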
Nov 26 23:56:57 compute-0 nova_compute[189387]: 2025-11-26 23:56:57.126 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:56:57 compute-0 nova_compute[189387]: 2025-11-26 23:56:57.368 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:56:57 compute-0 nova_compute[189387]: 2025-11-26 23:56:57.369 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:56:57 compute-0 nova_compute[189387]: 2025-11-26 23:56:57.850 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "refresh_cache-b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 26 23:56:57 compute-0 nova_compute[189387]: 2025-11-26 23:56:57.850 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquired lock "refresh_cache-b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 26 23:56:57 compute-0 nova_compute[189387]: 2025-11-26 23:56:57.851 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 26 23:56:58 compute-0 nova_compute[189387]: 2025-11-26 23:56:58.893 189391 DEBUG nova.network.neutron [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Updating instance_info_cache with network_info: [{"id": "538c994f-bee1-4965-9065-a8ef17e40bea", "address": "fa:16:3e:47:75:6d", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap538c994f-be", "ovs_interfaceid": "538c994f-bee1-4965-9065-a8ef17e40bea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 26 23:56:58 compute-0 nova_compute[189387]: 2025-11-26 23:56:58.912 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Releasing lock "refresh_cache-b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 26 23:56:58 compute-0 nova_compute[189387]: 2025-11-26 23:56:58.913 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 26 23:56:59 compute-0 podman[203621]: time="2025-11-26T23:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:56:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:56:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4815 "" "Go-http-client/1.1"
Nov 26 23:57:00 compute-0 nova_compute[189387]: 2025-11-26 23:57:00.126 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:57:00 compute-0 podman[257944]: 2025-11-26 23:57:00.832429633 +0000 UTC m=+0.111980784 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 26 23:57:01 compute-0 nova_compute[189387]: 2025-11-26 23:57:01.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:57:01 compute-0 nova_compute[189387]: 2025-11-26 23:57:01.126 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:57:01 compute-0 nova_compute[189387]: 2025-11-26 23:57:01.130 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:57:01 compute-0 openstack_network_exporter[205787]: ERROR   23:57:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:57:01 compute-0 openstack_network_exporter[205787]: ERROR   23:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:57:01 compute-0 openstack_network_exporter[205787]: ERROR   23:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:57:01 compute-0 openstack_network_exporter[205787]: ERROR   23:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:57:01 compute-0 openstack_network_exporter[205787]: ERROR   23:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
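The exporter errors above are expected on a compute node: ovn-northd runs on the controllers, not here, and the dpif-netdev/* commands only apply to the userspace (DPDK) datapath, while this host attaches ports with datapath_type "system", i.e. the kernel datapath (visible in the network info cache earlier). Listing the datapaths shows what actually exists; a sketch shelling out to ovs-appctl (requires access to the ovs-vswitchd control socket):

    # Sketch: show the configured datapath; on this host it should be the
    # kernel ("system") datapath, hence the dpif-netdev errors above.
    import subprocess

    out = subprocess.run(["ovs-appctl", "dpif/show"],
                         capture_output=True, text=True, check=True)
    print(out.stdout)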
Nov 26 23:57:02 compute-0 nova_compute[189387]: 2025-11-26 23:57:02.130 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:57:03 compute-0 nova_compute[189387]: 2025-11-26 23:57:03.122 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:57:03 compute-0 nova_compute[189387]: 2025-11-26 23:57:03.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:57:06 compute-0 nova_compute[189387]: 2025-11-26 23:57:06.133 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:57:07 compute-0 nova_compute[189387]: 2025-11-26 23:57:07.135 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:57:08 compute-0 podman[257963]: 2025-11-26 23:57:08.801950401 +0000 UTC m=+0.092571007 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, managed_by=edpm_ansible, release=1214.1726694543, release-0.7.12=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.openshift.expose-services=, name=ubi9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 26 23:57:08 compute-0 podman[257978]: 2025-11-26 23:57:08.825850058 +0000 UTC m=+0.091385397 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:57:08 compute-0 podman[257965]: 2025-11-26 23:57:08.826453043 +0000 UTC m=+0.091932050 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 23:57:08 compute-0 podman[257964]: 2025-11-26 23:57:08.832574747 +0000 UTC m=+0.116248649 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 26 23:57:08 compute-0 podman[257972]: 2025-11-26 23:57:08.843762485 +0000 UTC m=+0.106184980 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 26 23:57:08 compute-0 podman[257979]: 2025-11-26 23:57:08.876485286 +0000 UTC m=+0.131115054 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, version=9.6, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7)
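
Each podman health_status line above is the result of the container's configured healthcheck (the 'healthcheck' key in config_data). The same check can be triggered by hand; a sketch using the container name from the first health_status line, assuming podman is on PATH and the caller has sufficient privileges:

    # Run the "kepler" container's healthcheck once; exit code 0 == healthy.
    import subprocess

    result = subprocess.run(['podman', 'healthcheck', 'run', 'kepler'],
                            capture_output=True, text=True)
    print('healthy' if result.returncode == 0
          else result.stdout or result.stderr)
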
Nov 26 23:57:09 compute-0 nova_compute[189387]: 2025-11-26 23:57:09.126 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 26 23:57:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:09.667 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:57:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:09.668 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:57:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:09.669 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
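
The Acquiring/acquired/released triplet above is the standard trace emitted by oslo.concurrency's lock wrapper (the inner function at lockutils.py:404/409/423). A minimal sketch of the pattern that produces it; the lock name matches the log, the body is a placeholder:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        pass  # neutron's ProcessMonitor checks its child processes here

    # Calling the wrapped function logs the acquire/release pair seen above.
    _check_child_processes()
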
Nov 26 23:57:11 compute-0 nova_compute[189387]: 2025-11-26 23:57:11.135 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:57:12 compute-0 nova_compute[189387]: 2025-11-26 23:57:12.138 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:57:16 compute-0 nova_compute[189387]: 2025-11-26 23:57:16.139 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:57:17 compute-0 nova_compute[189387]: 2025-11-26 23:57:17.141 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:57:20 compute-0 podman[258081]: 2025-11-26 23:57:20.845233217 +0000 UTC m=+0.142160148 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 23:57:21 compute-0 nova_compute[189387]: 2025-11-26 23:57:21.141 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:57:22 compute-0 nova_compute[189387]: 2025-11-26 23:57:22.145 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:57:24 compute-0 podman[258100]: 2025-11-26 23:57:24.819655446 +0000 UTC m=+0.108085362 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 23:57:26 compute-0 nova_compute[189387]: 2025-11-26 23:57:26.144 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:57:27 compute-0 nova_compute[189387]: 2025-11-26 23:57:27.149 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:57:29 compute-0 podman[203621]: time="2025-11-26T23:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:57:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29522 "" "Go-http-client/1.1"
Nov 26 23:57:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4810 "" "Go-http-client/1.1"
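
The two HTTP access-log lines show a Go client (the prometheus-podman-exporter configured above, judging by its CONTAINER_HOST setting) querying the libpod REST API over the podman socket. A stdlib-only sketch of the same containers/json query; the socket path is taken from the exporter's config_data, everything else is illustrative:

    # Query the libpod REST API over its unix socket with only the stdlib.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print([c['Names'] for c in containers])
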
Nov 26 23:57:31 compute-0 nova_compute[189387]: 2025-11-26 23:57:31.079 189391 DEBUG oslo_concurrency.lockutils [None req-1ea49469-bd5d-402a-84d2-104ec72a5624 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Acquiring lock "0449208f-d12b-40cb-aa71-6f67f687cb6f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:57:31 compute-0 nova_compute[189387]: 2025-11-26 23:57:31.080 189391 DEBUG oslo_concurrency.lockutils [None req-1ea49469-bd5d-402a-84d2-104ec72a5624 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "0449208f-d12b-40cb-aa71-6f67f687cb6f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:57:31 compute-0 nova_compute[189387]: 2025-11-26 23:57:31.081 189391 DEBUG oslo_concurrency.lockutils [None req-1ea49469-bd5d-402a-84d2-104ec72a5624 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Acquiring lock "0449208f-d12b-40cb-aa71-6f67f687cb6f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:57:31 compute-0 nova_compute[189387]: 2025-11-26 23:57:31.082 189391 DEBUG oslo_concurrency.lockutils [None req-1ea49469-bd5d-402a-84d2-104ec72a5624 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "0449208f-d12b-40cb-aa71-6f67f687cb6f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:57:31 compute-0 nova_compute[189387]: 2025-11-26 23:57:31.083 189391 DEBUG oslo_concurrency.lockutils [None req-1ea49469-bd5d-402a-84d2-104ec72a5624 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "0449208f-d12b-40cb-aa71-6f67f687cb6f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:57:31 compute-0 nova_compute[189387]: 2025-11-26 23:57:31.086 189391 INFO nova.compute.manager [None req-1ea49469-bd5d-402a-84d2-104ec72a5624 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Terminating instance#033[00m
Nov 26 23:57:31 compute-0 nova_compute[189387]: 2025-11-26 23:57:31.088 189391 DEBUG nova.compute.manager [None req-1ea49469-bd5d-402a-84d2-104ec72a5624 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 26 23:57:31 compute-0 kernel: tapa6675240-60 (unregistering): left promiscuous mode
Nov 26 23:57:31 compute-0 NetworkManager[56227]: <info>  [1764201451.1423] device (tapa6675240-60): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 23:57:31 compute-0 ovn_controller[97697]: 2025-11-26T23:57:31Z|00239|binding|INFO|Releasing lport a6675240-60ea-47db-9ef6-66080adb5743 from this chassis (sb_readonly=0)
Nov 26 23:57:31 compute-0 ovn_controller[97697]: 2025-11-26T23:57:31Z|00240|binding|INFO|Setting lport a6675240-60ea-47db-9ef6-66080adb5743 down in Southbound
Nov 26 23:57:31 compute-0 ovn_controller[97697]: 2025-11-26T23:57:31Z|00241|binding|INFO|Removing iface tapa6675240-60 ovn-installed in OVS
Nov 26 23:57:31 compute-0 nova_compute[189387]: 2025-11-26 23:57:31.165 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:57:31 compute-0 nova_compute[189387]: 2025-11-26 23:57:31.169 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:57:31 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:31.184 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d6:2e:64 10.100.2.181'], port_security=['fa:16:3e:d6:2e:64 10.100.2.181'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.2.181/16', 'neutron:device_id': '0449208f-d12b-40cb-aa71-6f67f687cb6f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76428163-53d4-4bce-87f0-25b9eaf2a465', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '717a3950b66241768222cb5d4ba3291e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '75bb422f-e7bb-41bc-a8be-3077d4c0bdb7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a3d5333e-350e-4d89-bebd-143dbb215949, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=a6675240-60ea-47db-9ef6-66080adb5743) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:57:31 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:31.186 106595 INFO neutron.agent.ovn.metadata.agent [-] Port a6675240-60ea-47db-9ef6-66080adb5743 in datapath 76428163-53d4-4bce-87f0-25b9eaf2a465 unbound from our chassis#033[00m
Nov 26 23:57:31 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:31.188 106595 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 76428163-53d4-4bce-87f0-25b9eaf2a465#033[00m
Nov 26 23:57:31 compute-0 nova_compute[189387]: 2025-11-26 23:57:31.188 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:57:31 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Nov 26 23:57:31 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Consumed 7min 21.098s CPU time.
Nov 26 23:57:31 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:31.217 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[072b87c5-d086-496c-8b81-f6b28ec8dcb7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:57:31 compute-0 systemd-machined[155674]: Machine qemu-15-instance-0000000e terminated.
Nov 26 23:57:31 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:31.255 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[963836ed-9e66-4743-9fa4-0ba6143a2a7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:57:31 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:31.259 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[5198d2dc-c7ea-4e22-a06e-44cc349af5e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:57:31 compute-0 podman[258125]: 2025-11-26 23:57:31.288754755 +0000 UTC m=+0.100415566 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, org.label-schema.vendor=CentOS)
Nov 26 23:57:31 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:31.293 239818 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2acb99-ad6f-448d-b925-e39f2309b6a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:57:31 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:31.313 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[8951470b-ff4a-48ba-a1b3-2fef02ca08ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76428163-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3d:fd:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 8, 'rx_bytes': 1960, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 8, 'rx_bytes': 1960, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 46], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534613, 'reachable_time': 40375, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258157, 'error': None, 'target': 'ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:57:31 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:31.332 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[c990f5ae-7c3d-4cae-8f3c-c830d090fa1c]: (4, ({'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tap76428163-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534626, 'tstamp': 534626}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258163, 'error': None, 'target': 'ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap76428163-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534629, 'tstamp': 534629}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258163, 'error': None, 'target': 'ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
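
The two privsep replies above are serialized pyroute2 netlink messages (RTM_NEWLINK, then RTM_NEWADDR) read from inside the ovnmeta- namespace; note 169.254.169.254 on tap76428163-51, the metadata address. A sketch of an equivalent read, assuming pyroute2 is installed and the namespace exists on the host:

    # List links and addresses in the metadata namespace, roughly the query
    # whose netlink replies appear above. Namespace name is from the log.
    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465')
    try:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'), link['state'])
        for addr in ns.get_addr():
            print(addr.get_attr('IFA_ADDRESS'), '/', addr['prefixlen'])
    finally:
        ns.close()
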
Nov 26 23:57:31 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:31.334 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76428163-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:57:31 compute-0 nova_compute[189387]: 2025-11-26 23:57:31.337 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:57:31 compute-0 nova_compute[189387]: 2025-11-26 23:57:31.344 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:57:31 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:31.345 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76428163-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:57:31 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:31.345 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 26 23:57:31 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:31.346 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap76428163-50, col_values=(('external_ids', {'iface-id': '6eddef7b-a60a-473c-89bf-18f9394dad32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:57:31 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:31.346 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
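
The three transactions above (DelPortCommand, AddPortCommand, DbSetCommand) are ovsdbapp commands re-plugging the metadata tap into br-int; the second and third report "Transaction caused no change" because the port was already in the desired state. A sketch of issuing the same commands through ovsdbapp's OVS API, batched into one transaction for brevity; the db socket path and timeout are assumptions, the port names and iface-id come from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local ovsdb-server (socket path is an assumption).
    ovsidl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                             'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(ovsidl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap76428163-50', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap76428163-50', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap76428163-50',
            ('external_ids',
             {'iface-id': '6eddef7b-a60a-473c-89bf-18f9394dad32'})))
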
Nov 26 23:57:31 compute-0 nova_compute[189387]: 2025-11-26 23:57:31.365 189391 INFO nova.virt.libvirt.driver [-] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Instance destroyed successfully.#033[00m
Nov 26 23:57:31 compute-0 nova_compute[189387]: 2025-11-26 23:57:31.366 189391 DEBUG nova.objects.instance [None req-1ea49469-bd5d-402a-84d2-104ec72a5624 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lazy-loading 'resources' on Instance uuid 0449208f-d12b-40cb-aa71-6f67f687cb6f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:57:31 compute-0 nova_compute[189387]: 2025-11-26 23:57:31.384 189391 DEBUG nova.virt.libvirt.vif [None req-1ea49469-bd5d-402a-84d2-104ec72a5624 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T23:44:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-7486994-asg-gqdvh3lloqbk-tbw4korh7qqj-gmgmzkd7t7di',id=14,image_ref='aa1a3d84-3b07-42eb-bb8c-755851616ed6',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-26T23:44:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='92e43243-aca7-437e-ae08-bcb42a48e489'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='717a3950b66241768222cb5d4ba3291e',ramdisk_id='',reservation_id='r-bszb5qzy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='aa1a3d84-3b07-42eb-bb8c-755851616ed6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-1561175050',owner_user_name='tempest-PrometheusGabbiTest-1561175050-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T23:44:30Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='5715267a6ec9422aa9b3ef4a2956aa77',uuid=0449208f-d12b-40cb-aa71-6f67f687cb6f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6675240-60ea-47db-9ef6-66080adb5743", "address": "fa:16:3e:d6:2e:64", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6675240-60", "ovs_interfaceid": "a6675240-60ea-47db-9ef6-66080adb5743", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 26 23:57:31 compute-0 nova_compute[189387]: 2025-11-26 23:57:31.384 189391 DEBUG nova.network.os_vif_util [None req-1ea49469-bd5d-402a-84d2-104ec72a5624 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Converting VIF {"id": "a6675240-60ea-47db-9ef6-66080adb5743", "address": "fa:16:3e:d6:2e:64", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.181", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6675240-60", "ovs_interfaceid": "a6675240-60ea-47db-9ef6-66080adb5743", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:57:31 compute-0 nova_compute[189387]: 2025-11-26 23:57:31.386 189391 DEBUG nova.network.os_vif_util [None req-1ea49469-bd5d-402a-84d2-104ec72a5624 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d6:2e:64,bridge_name='br-int',has_traffic_filtering=True,id=a6675240-60ea-47db-9ef6-66080adb5743,network=Network(76428163-53d4-4bce-87f0-25b9eaf2a465),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6675240-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:57:31 compute-0 nova_compute[189387]: 2025-11-26 23:57:31.386 189391 DEBUG os_vif [None req-1ea49469-bd5d-402a-84d2-104ec72a5624 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:2e:64,bridge_name='br-int',has_traffic_filtering=True,id=a6675240-60ea-47db-9ef6-66080adb5743,network=Network(76428163-53d4-4bce-87f0-25b9eaf2a465),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6675240-60') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 26 23:57:31 compute-0 nova_compute[189387]: 2025-11-26 23:57:31.388 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:57:31 compute-0 nova_compute[189387]: 2025-11-26 23:57:31.389 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa6675240-60, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:57:31 compute-0 nova_compute[189387]: 2025-11-26 23:57:31.392 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:57:31 compute-0 nova_compute[189387]: 2025-11-26 23:57:31.394 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:57:31 compute-0 nova_compute[189387]: 2025-11-26 23:57:31.397 189391 INFO os_vif [None req-1ea49469-bd5d-402a-84d2-104ec72a5624 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d6:2e:64,bridge_name='br-int',has_traffic_filtering=True,id=a6675240-60ea-47db-9ef6-66080adb5743,network=Network(76428163-53d4-4bce-87f0-25b9eaf2a465),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6675240-60')#033[00m
Nov 26 23:57:31 compute-0 nova_compute[189387]: 2025-11-26 23:57:31.398 189391 INFO nova.virt.libvirt.driver [None req-1ea49469-bd5d-402a-84d2-104ec72a5624 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Deleting instance files /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f_del#033[00m
Nov 26 23:57:31 compute-0 nova_compute[189387]: 2025-11-26 23:57:31.400 189391 INFO nova.virt.libvirt.driver [None req-1ea49469-bd5d-402a-84d2-104ec72a5624 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Deletion of /var/lib/nova/instances/0449208f-d12b-40cb-aa71-6f67f687cb6f_del complete#033[00m
Nov 26 23:57:31 compute-0 openstack_network_exporter[205787]: ERROR   23:57:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:57:31 compute-0 openstack_network_exporter[205787]: ERROR   23:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:57:31 compute-0 openstack_network_exporter[205787]: ERROR   23:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:57:31 compute-0 openstack_network_exporter[205787]: ERROR   23:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:57:31 compute-0 openstack_network_exporter[205787]: ERROR   23:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:57:31 compute-0 nova_compute[189387]: 2025-11-26 23:57:31.477 189391 INFO nova.compute.manager [None req-1ea49469-bd5d-402a-84d2-104ec72a5624 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Took 0.39 seconds to destroy the instance on the hypervisor.#033[00m
Nov 26 23:57:31 compute-0 nova_compute[189387]: 2025-11-26 23:57:31.477 189391 DEBUG oslo.service.loopingcall [None req-1ea49469-bd5d-402a-84d2-104ec72a5624 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 26 23:57:31 compute-0 nova_compute[189387]: 2025-11-26 23:57:31.478 189391 DEBUG nova.compute.manager [-] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 26 23:57:31 compute-0 nova_compute[189387]: 2025-11-26 23:57:31.479 189391 DEBUG nova.network.neutron [-] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 26 23:57:32 compute-0 nova_compute[189387]: 2025-11-26 23:57:32.223 189391 DEBUG nova.compute.manager [req-ce4dafd9-3fba-4a07-8814-755bbae6f0f0 req-089043b4-540b-4f7f-a383-80c3b20f2973 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Received event network-vif-unplugged-a6675240-60ea-47db-9ef6-66080adb5743 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:57:32 compute-0 nova_compute[189387]: 2025-11-26 23:57:32.224 189391 DEBUG oslo_concurrency.lockutils [req-ce4dafd9-3fba-4a07-8814-755bbae6f0f0 req-089043b4-540b-4f7f-a383-80c3b20f2973 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "0449208f-d12b-40cb-aa71-6f67f687cb6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:57:32 compute-0 nova_compute[189387]: 2025-11-26 23:57:32.224 189391 DEBUG oslo_concurrency.lockutils [req-ce4dafd9-3fba-4a07-8814-755bbae6f0f0 req-089043b4-540b-4f7f-a383-80c3b20f2973 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "0449208f-d12b-40cb-aa71-6f67f687cb6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:57:32 compute-0 nova_compute[189387]: 2025-11-26 23:57:32.225 189391 DEBUG oslo_concurrency.lockutils [req-ce4dafd9-3fba-4a07-8814-755bbae6f0f0 req-089043b4-540b-4f7f-a383-80c3b20f2973 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "0449208f-d12b-40cb-aa71-6f67f687cb6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:57:32 compute-0 nova_compute[189387]: 2025-11-26 23:57:32.225 189391 DEBUG nova.compute.manager [req-ce4dafd9-3fba-4a07-8814-755bbae6f0f0 req-089043b4-540b-4f7f-a383-80c3b20f2973 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] No waiting events found dispatching network-vif-unplugged-a6675240-60ea-47db-9ef6-66080adb5743 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:57:32 compute-0 nova_compute[189387]: 2025-11-26 23:57:32.226 189391 DEBUG nova.compute.manager [req-ce4dafd9-3fba-4a07-8814-755bbae6f0f0 req-089043b4-540b-4f7f-a383-80c3b20f2973 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Received event network-vif-unplugged-a6675240-60ea-47db-9ef6-66080adb5743 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 26 23:57:32 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:32.598 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:74:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:17:d1:48:8c:c3'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:57:32 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:32.600 106595 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 26 23:57:32 compute-0 nova_compute[189387]: 2025-11-26 23:57:32.600 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:57:32 compute-0 nova_compute[189387]: 2025-11-26 23:57:32.630 189391 DEBUG nova.network.neutron [-] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:57:32 compute-0 nova_compute[189387]: 2025-11-26 23:57:32.656 189391 INFO nova.compute.manager [-] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Took 1.18 seconds to deallocate network for instance.#033[00m
Nov 26 23:57:32 compute-0 nova_compute[189387]: 2025-11-26 23:57:32.718 189391 DEBUG oslo_concurrency.lockutils [None req-1ea49469-bd5d-402a-84d2-104ec72a5624 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:57:32 compute-0 nova_compute[189387]: 2025-11-26 23:57:32.718 189391 DEBUG oslo_concurrency.lockutils [None req-1ea49469-bd5d-402a-84d2-104ec72a5624 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:57:32 compute-0 nova_compute[189387]: 2025-11-26 23:57:32.727 189391 DEBUG nova.compute.manager [req-80e6ea46-2ef3-42d7-89b4-9598ede03d20 req-eb80bda5-5dec-484c-bb31-e6a48052358e f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Received event network-vif-deleted-a6675240-60ea-47db-9ef6-66080adb5743 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:57:32 compute-0 nova_compute[189387]: 2025-11-26 23:57:32.809 189391 DEBUG nova.compute.provider_tree [None req-1ea49469-bd5d-402a-84d2-104ec72a5624 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:57:32 compute-0 nova_compute[189387]: 2025-11-26 23:57:32.823 189391 DEBUG nova.scheduler.client.report [None req-1ea49469-bd5d-402a-84d2-104ec72a5624 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
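
The inventory dict in the report line fixes the node's schedulable capacity: placement computes capacity per resource class as (total - reserved) * allocation_ratio. A quick check of what this node can accept:

    # Capacity implied by the inventory logged above:
    # (total - reserved) * allocation_ratio per resource class.
    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 79, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2
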
Nov 26 23:57:32 compute-0 nova_compute[189387]: 2025-11-26 23:57:32.851 189391 DEBUG oslo_concurrency.lockutils [None req-1ea49469-bd5d-402a-84d2-104ec72a5624 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:57:32 compute-0 nova_compute[189387]: 2025-11-26 23:57:32.879 189391 INFO nova.scheduler.client.report [None req-1ea49469-bd5d-402a-84d2-104ec72a5624 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Deleted allocations for instance 0449208f-d12b-40cb-aa71-6f67f687cb6f#033[00m
Nov 26 23:57:32 compute-0 nova_compute[189387]: 2025-11-26 23:57:32.936 189391 DEBUG oslo_concurrency.lockutils [None req-1ea49469-bd5d-402a-84d2-104ec72a5624 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "0449208f-d12b-40cb-aa71-6f67f687cb6f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:57:33 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:33.603 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbd59242-3683-4df7-8a2a-12b2eb702783, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:57:34 compute-0 nova_compute[189387]: 2025-11-26 23:57:34.332 189391 DEBUG nova.compute.manager [req-5576020c-4a06-430b-b08d-69301ac38cde req-68a5e9b3-417d-494a-b5cc-1e326fff2581 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Received event network-vif-plugged-a6675240-60ea-47db-9ef6-66080adb5743 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:57:34 compute-0 nova_compute[189387]: 2025-11-26 23:57:34.333 189391 DEBUG oslo_concurrency.lockutils [req-5576020c-4a06-430b-b08d-69301ac38cde req-68a5e9b3-417d-494a-b5cc-1e326fff2581 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "0449208f-d12b-40cb-aa71-6f67f687cb6f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:57:34 compute-0 nova_compute[189387]: 2025-11-26 23:57:34.333 189391 DEBUG oslo_concurrency.lockutils [req-5576020c-4a06-430b-b08d-69301ac38cde req-68a5e9b3-417d-494a-b5cc-1e326fff2581 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "0449208f-d12b-40cb-aa71-6f67f687cb6f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:57:34 compute-0 nova_compute[189387]: 2025-11-26 23:57:34.334 189391 DEBUG oslo_concurrency.lockutils [req-5576020c-4a06-430b-b08d-69301ac38cde req-68a5e9b3-417d-494a-b5cc-1e326fff2581 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "0449208f-d12b-40cb-aa71-6f67f687cb6f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:57:34 compute-0 nova_compute[189387]: 2025-11-26 23:57:34.334 189391 DEBUG nova.compute.manager [req-5576020c-4a06-430b-b08d-69301ac38cde req-68a5e9b3-417d-494a-b5cc-1e326fff2581 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] No waiting events found dispatching network-vif-plugged-a6675240-60ea-47db-9ef6-66080adb5743 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:57:34 compute-0 nova_compute[189387]: 2025-11-26 23:57:34.335 189391 WARNING nova.compute.manager [req-5576020c-4a06-430b-b08d-69301ac38cde req-68a5e9b3-417d-494a-b5cc-1e326fff2581 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Received unexpected event network-vif-plugged-a6675240-60ea-47db-9ef6-66080adb5743 for instance with vm_state deleted and task_state None.#033[00m
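[annotation] The WARNING above is Nova's normal response when Neutron delivers a network-vif-plugged event after the instance is already gone: pop_instance_event finds no registered waiter, and with vm_state 'deleted' there is nothing to resume. An illustrative sketch of that dispatch decision (not Nova's actual code; the UUID and event name are from the log):

```python
# An external event only has an effect if some operation registered a
# waiter for (instance, event) beforehand.
pending_events = {}  # (instance_uuid, event_name) -> waiter object

def dispatch(instance_uuid, vm_state, event_name):
    waiter = pending_events.pop((instance_uuid, event_name), None)
    if waiter is not None:
        waiter.send(event_name)  # wake the operation blocked on this event
    elif vm_state == 'deleted':
        print('WARNING: unexpected %s for deleted instance' % event_name)

dispatch('0449208f-d12b-40cb-aa71-6f67f687cb6f', 'deleted',
         'network-vif-plugged-a6675240-60ea-47db-9ef6-66080adb5743')
```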
Nov 26 23:57:36 compute-0 nova_compute[189387]: 2025-11-26 23:57:36.171 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:57:36 compute-0 nova_compute[189387]: 2025-11-26 23:57:36.393 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.145 189391 DEBUG oslo_concurrency.lockutils [None req-d07bd6ac-4e55-42e0-bf03-51e503661449 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Acquiring lock "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.145 189391 DEBUG oslo_concurrency.lockutils [None req-d07bd6ac-4e55-42e0-bf03-51e503661449 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.145 189391 DEBUG oslo_concurrency.lockutils [None req-d07bd6ac-4e55-42e0-bf03-51e503661449 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Acquiring lock "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.146 189391 DEBUG oslo_concurrency.lockutils [None req-d07bd6ac-4e55-42e0-bf03-51e503661449 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.146 189391 DEBUG oslo_concurrency.lockutils [None req-d07bd6ac-4e55-42e0-bf03-51e503661449 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.147 189391 INFO nova.compute.manager [None req-d07bd6ac-4e55-42e0-bf03-51e503661449 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Terminating instance#033[00m
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.148 189391 DEBUG nova.compute.manager [None req-d07bd6ac-4e55-42e0-bf03-51e503661449 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 26 23:57:39 compute-0 kernel: tap538c994f-be (unregistering): left promiscuous mode
Nov 26 23:57:39 compute-0 NetworkManager[56227]: <info>  [1764201459.1836] device (tap538c994f-be): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.197 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:57:39 compute-0 ovn_controller[97697]: 2025-11-26T23:57:39Z|00242|binding|INFO|Releasing lport 538c994f-bee1-4965-9065-a8ef17e40bea from this chassis (sb_readonly=0)
Nov 26 23:57:39 compute-0 ovn_controller[97697]: 2025-11-26T23:57:39Z|00243|binding|INFO|Setting lport 538c994f-bee1-4965-9065-a8ef17e40bea down in Southbound
Nov 26 23:57:39 compute-0 ovn_controller[97697]: 2025-11-26T23:57:39Z|00244|binding|INFO|Removing iface tap538c994f-be ovn-installed in OVS
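[annotation] The three binding messages above correspond to ovn-controller clearing the chassis column on the SB Port_Binding row, setting up=[false], and dropping the ovn-installed flag on the OVS interface. One way to observe that state from Python (a hedged sketch; assumes ovn-sbctl and southbound access; the logical-port UUID is from the log):

```python
import subprocess

# After the release, chassis should be empty and up should be false for this row.
out = subprocess.run(
    ['ovn-sbctl', '--columns=logical_port,chassis,up', 'find', 'Port_Binding',
     'logical_port=538c994f-bee1-4965-9065-a8ef17e40bea'],
    capture_output=True, text=True, check=True,
)
print(out.stdout)
```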
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.206 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:57:39 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:39.212 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:75:6d 10.100.3.7'], port_security=['fa:16:3e:47:75:6d 10.100.3.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.3.7/16', 'neutron:device_id': 'b7d5e999-38ca-46e8-b572-cc9fad0fc2cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76428163-53d4-4bce-87f0-25b9eaf2a465', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '717a3950b66241768222cb5d4ba3291e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '75bb422f-e7bb-41bc-a8be-3077d4c0bdb7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a3d5333e-350e-4d89-bebd-143dbb215949, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>], logical_port=538c994f-bee1-4965-9065-a8ef17e40bea) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f0819fe2670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 26 23:57:39 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:39.214 106595 INFO neutron.agent.ovn.metadata.agent [-] Port 538c994f-bee1-4965-9065-a8ef17e40bea in datapath 76428163-53d4-4bce-87f0-25b9eaf2a465 unbound from our chassis#033[00m
Nov 26 23:57:39 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:39.217 106595 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 76428163-53d4-4bce-87f0-25b9eaf2a465, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 26 23:57:39 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:39.218 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[e43c970a-a3be-42df-8cef-e0b73d0c4f4b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:57:39 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:39.219 106595 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465 namespace which is not needed anymore#033[00m
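[annotation] The agent tears down the per-network ovnmeta namespace once no VIF ports remain bound on this chassis for that datapath. A hedged sketch of the check-and-remove step using pyroute2, the library neutron's privileged ip_lib wraps (function name is illustrative; needs root, like the privsep daemon):

```python
from pyroute2 import netns

def teardown_if_unused(network_id, vif_ports):
    # No VIFs left for this network on this chassis -> namespace can go.
    name = 'ovnmeta-%s' % network_id
    if not vif_ports and name in netns.listnetns():
        netns.remove(name)

teardown_if_unused('76428163-53d4-4bce-87f0-25b9eaf2a465', vif_ports=[])
```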
Nov 26 23:57:39 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Nov 26 23:57:39 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Consumed 6min 50.795s CPU time.
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.251 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:57:39 compute-0 systemd-machined[155674]: Machine qemu-16-instance-0000000f terminated.
Nov 26 23:57:39 compute-0 podman[258181]: 2025-11-26 23:57:39.34871373 +0000 UTC m=+0.130591531 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, config_id=edpm, io.openshift.tags=base rhel9, managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-container, name=ubi9, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, build-date=2024-09-18T21:23:30)
Nov 26 23:57:39 compute-0 podman[258197]: 2025-11-26 23:57:39.376197672 +0000 UTC m=+0.128893735 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 26 23:57:39 compute-0 podman[258184]: 2025-11-26 23:57:39.387359939 +0000 UTC m=+0.144386937 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:57:39 compute-0 podman[258190]: 2025-11-26 23:57:39.393569955 +0000 UTC m=+0.128634328 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 26 23:57:39 compute-0 podman[258183]: 2025-11-26 23:57:39.394854508 +0000 UTC m=+0.166944648 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 26 23:57:39 compute-0 podman[258199]: 2025-11-26 23:57:39.403846069 +0000 UTC m=+0.145065057 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, release=1755695350, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., version=9.6, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
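[annotation] The burst of health_status=healthy lines above is podman executing each container's configured healthcheck (the 'test' entries in config_data). The same checks can be run on demand; a hedged sketch, assuming podman is on PATH and the container names from the log:

```python
import subprocess

# 'podman healthcheck run NAME' exits 0 when the container is healthy.
for name in ('kepler', 'ceilometer_agent_ipmi', 'node_exporter',
             'ovn_metadata_agent', 'ovn_controller', 'openstack_network_exporter'):
    rc = subprocess.run(['podman', 'healthcheck', 'run', name]).returncode
    print(name, 'healthy' if rc == 0 else 'unhealthy (rc=%d)' % rc)
```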
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.420 189391 INFO nova.virt.libvirt.driver [-] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Instance destroyed successfully.#033[00m
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.421 189391 DEBUG nova.objects.instance [None req-d07bd6ac-4e55-42e0-bf03-51e503661449 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lazy-loading 'resources' on Instance uuid b7d5e999-38ca-46e8-b572-cc9fad0fc2cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 26 23:57:39 compute-0 neutron-haproxy-ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465[253175]: [NOTICE]   (253179) : haproxy version is 2.8.14-c23fe91
Nov 26 23:57:39 compute-0 neutron-haproxy-ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465[253175]: [NOTICE]   (253179) : path to executable is /usr/sbin/haproxy
Nov 26 23:57:39 compute-0 neutron-haproxy-ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465[253175]: [WARNING]  (253179) : Exiting Master process...
Nov 26 23:57:39 compute-0 neutron-haproxy-ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465[253175]: [WARNING]  (253179) : Exiting Master process...
Nov 26 23:57:39 compute-0 neutron-haproxy-ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465[253175]: [ALERT]    (253179) : Current worker (253181) exited with code 143 (Terminated)
Nov 26 23:57:39 compute-0 neutron-haproxy-ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465[253175]: [WARNING]  (253179) : All workers exited. Exiting... (0)
Nov 26 23:57:39 compute-0 systemd[1]: libpod-8ec07e663f1a1805c23593fc7ee20e3e7c7e20916e7261e63b4b3510d2ec8f69.scope: Deactivated successfully.
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.435 189391 DEBUG nova.virt.libvirt.vif [None req-d07bd6ac-4e55-42e0-bf03-51e503661449 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-26T23:47:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-7486994-asg-gqdvh3lloqbk-w3pew7r5aglv-t7fkcg4jtkgf',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-7486994-asg-gqdvh3lloqbk-w3pew7r5aglv-t7fkcg4jtkgf',id=15,image_ref='aa1a3d84-3b07-42eb-bb8c-755851616ed6',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-26T23:47:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='92e43243-aca7-437e-ae08-bcb42a48e489'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='717a3950b66241768222cb5d4ba3291e',ramdisk_id='',reservation_id='r-hxdxf1qm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='aa1a3d84-3b07-42eb-bb8c-755851616ed6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-1561175050',owner_user_name='tempest-PrometheusGabbiTest-1561175050-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-26T23:47:43Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='5715267a6ec9422aa9b3ef4a2956aa77',uuid=b7d5e999-38ca-46e8-b572-cc9fad0fc2cc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "538c994f-bee1-4965-9065-a8ef17e40bea", "address": "fa:16:3e:47:75:6d", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap538c994f-be", "ovs_interfaceid": "538c994f-bee1-4965-9065-a8ef17e40bea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.436 189391 DEBUG nova.network.os_vif_util [None req-d07bd6ac-4e55-42e0-bf03-51e503661449 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Converting VIF {"id": "538c994f-bee1-4965-9065-a8ef17e40bea", "address": "fa:16:3e:47:75:6d", "network": {"id": "76428163-53d4-4bce-87f0-25b9eaf2a465", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "717a3950b66241768222cb5d4ba3291e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap538c994f-be", "ovs_interfaceid": "538c994f-bee1-4965-9065-a8ef17e40bea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.436 189391 DEBUG nova.network.os_vif_util [None req-d07bd6ac-4e55-42e0-bf03-51e503661449 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:47:75:6d,bridge_name='br-int',has_traffic_filtering=True,id=538c994f-bee1-4965-9065-a8ef17e40bea,network=Network(76428163-53d4-4bce-87f0-25b9eaf2a465),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap538c994f-be') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.439 189391 DEBUG os_vif [None req-d07bd6ac-4e55-42e0-bf03-51e503661449 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:75:6d,bridge_name='br-int',has_traffic_filtering=True,id=538c994f-bee1-4965-9065-a8ef17e40bea,network=Network(76428163-53d4-4bce-87f0-25b9eaf2a465),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap538c994f-be') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 26 23:57:39 compute-0 podman[258308]: 2025-11-26 23:57:39.440227458 +0000 UTC m=+0.061318255 container died 8ec07e663f1a1805c23593fc7ee20e3e7c7e20916e7261e63b4b3510d2ec8f69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.440 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.441 189391 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap538c994f-be, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.442 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.445 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.449 189391 INFO os_vif [None req-d07bd6ac-4e55-42e0-bf03-51e503661449 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:75:6d,bridge_name='br-int',has_traffic_filtering=True,id=538c994f-bee1-4965-9065-a8ef17e40bea,network=Network(76428163-53d4-4bce-87f0-25b9eaf2a465),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap538c994f-be')#033[00m
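[annotation] For an OVS VIF, the os-vif unplug above reduces to the DelPortCommand transaction at 23:57:39.441: removing the tap interface from the integration bridge. A hedged shell-level equivalent from Python (assumes ovs-vsctl and local OVS access; port and bridge names are from the log):

```python
import subprocess

# --if-exists mirrors if_exists=True in the logged DelPortCommand.
subprocess.run(
    ['ovs-vsctl', '--if-exists', 'del-port', 'br-int', 'tap538c994f-be'],
    check=True,
)
```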
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.450 189391 INFO nova.virt.libvirt.driver [None req-d07bd6ac-4e55-42e0-bf03-51e503661449 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Deleting instance files /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc_del#033[00m
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.451 189391 INFO nova.virt.libvirt.driver [None req-d07bd6ac-4e55-42e0-bf03-51e503661449 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Deletion of /var/lib/nova/instances/b7d5e999-38ca-46e8-b572-cc9fad0fc2cc_del complete#033[00m
Nov 26 23:57:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8ec07e663f1a1805c23593fc7ee20e3e7c7e20916e7261e63b4b3510d2ec8f69-userdata-shm.mount: Deactivated successfully.
Nov 26 23:57:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a3057320d1b85a4ac9f9c7ea9c4171a3ba99aea6dba66ced1a05a2fafa3d558-merged.mount: Deactivated successfully.
Nov 26 23:57:39 compute-0 podman[258308]: 2025-11-26 23:57:39.487919688 +0000 UTC m=+0.109010485 container cleanup 8ec07e663f1a1805c23593fc7ee20e3e7c7e20916e7261e63b4b3510d2ec8f69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 26 23:57:39 compute-0 systemd[1]: libpod-conmon-8ec07e663f1a1805c23593fc7ee20e3e7c7e20916e7261e63b4b3510d2ec8f69.scope: Deactivated successfully.
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.533 189391 INFO nova.compute.manager [None req-d07bd6ac-4e55-42e0-bf03-51e503661449 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Took 0.38 seconds to destroy the instance on the hypervisor.#033[00m
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.533 189391 DEBUG oslo.service.loopingcall [None req-d07bd6ac-4e55-42e0-bf03-51e503661449 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.533 189391 DEBUG nova.compute.manager [-] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.533 189391 DEBUG nova.network.neutron [-] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
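[annotation] _deallocate_network_with_retries is driven by an oslo.service looping call (the "Waiting for function ... to return" line) so transient Neutron failures do not leak ports. A simplified back-off retry sketch in plain Python, standing in for the looping-call machinery rather than reproducing it:

```python
import time

def deallocate_with_retries(deallocate, attempts=3, base_delay=1.0):
    # Retry transient errors with growing delays, then give up.
    for attempt in range(attempts):
        try:
            return deallocate()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```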
Nov 26 23:57:39 compute-0 podman[258363]: 2025-11-26 23:57:39.581712347 +0000 UTC m=+0.064960041 container remove 8ec07e663f1a1805c23593fc7ee20e3e7c7e20916e7261e63b4b3510d2ec8f69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 26 23:57:39 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:39.594 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[47ab82f3-da2a-4d97-8855-60eddf51e48e]: (4, ('Wed Nov 26 11:57:39 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465 (8ec07e663f1a1805c23593fc7ee20e3e7c7e20916e7261e63b4b3510d2ec8f69)\n8ec07e663f1a1805c23593fc7ee20e3e7c7e20916e7261e63b4b3510d2ec8f69\nWed Nov 26 11:57:39 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465 (8ec07e663f1a1805c23593fc7ee20e3e7c7e20916e7261e63b4b3510d2ec8f69)\n8ec07e663f1a1805c23593fc7ee20e3e7c7e20916e7261e63b4b3510d2ec8f69\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:57:39 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:39.595 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[33423955-8664-4a22-bd4a-62ef0a6e033b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:57:39 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:39.596 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76428163-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.598 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:57:39 compute-0 kernel: tap76428163-50: left promiscuous mode
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.602 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:57:39 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:39.605 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[0811352b-eb77-4da5-9339-8b6588e14a90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:57:39 compute-0 nova_compute[189387]: 2025-11-26 23:57:39.619 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:57:39 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:39.622 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[c027ebae-c824-486a-9efa-7390114e5c60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:57:39 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:39.624 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[40cc9c62-daad-4bf9-a9a7-a03cb5a7962b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:57:39 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:39.640 239757 DEBUG oslo.privsep.daemon [-] privsep: reply[5134ba51-78ca-49b9-9592-b9fa8c5b1c90]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534605, 'reachable_time': 42223, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258375, 'error': None, 'target': 'ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
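[annotation] The large privsep reply above is a raw netlink RTM_NEWLINK dump of the namespace's loopback device, fetched just before the namespace is deleted. The same data can be read with pyroute2 directly; a hedged sketch, assuming the namespace still exists and root privileges (namespace name from the log):

```python
from pyroute2 import NetNS

# Each message carries the IFLA_* attributes and 'state' field seen above.
ns = NetNS('ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465')
try:
    for link in ns.get_links():
        print(link.get_attr('IFLA_IFNAME'), link['state'])
finally:
    ns.close()
```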
Nov 26 23:57:39 compute-0 systemd[1]: run-netns-ovnmeta\x2d76428163\x2d53d4\x2d4bce\x2d87f0\x2d25b9eaf2a465.mount: Deactivated successfully.
Nov 26 23:57:39 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:39.644 106708 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-76428163-53d4-4bce-87f0-25b9eaf2a465 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 26 23:57:39 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:57:39.645 106708 DEBUG oslo.privsep.daemon [-] privsep: reply[21b52319-825e-4a7e-a8ae-36b9e22ff374]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 26 23:57:40 compute-0 nova_compute[189387]: 2025-11-26 23:57:40.159 189391 DEBUG nova.compute.manager [req-a318765e-06c8-4268-ac5b-df3dae4a9237 req-de93a3a1-5470-4fc8-bbd1-55485cb7b993 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Received event network-vif-unplugged-538c994f-bee1-4965-9065-a8ef17e40bea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:57:40 compute-0 nova_compute[189387]: 2025-11-26 23:57:40.160 189391 DEBUG oslo_concurrency.lockutils [req-a318765e-06c8-4268-ac5b-df3dae4a9237 req-de93a3a1-5470-4fc8-bbd1-55485cb7b993 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:57:40 compute-0 nova_compute[189387]: 2025-11-26 23:57:40.160 189391 DEBUG oslo_concurrency.lockutils [req-a318765e-06c8-4268-ac5b-df3dae4a9237 req-de93a3a1-5470-4fc8-bbd1-55485cb7b993 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:57:40 compute-0 nova_compute[189387]: 2025-11-26 23:57:40.161 189391 DEBUG oslo_concurrency.lockutils [req-a318765e-06c8-4268-ac5b-df3dae4a9237 req-de93a3a1-5470-4fc8-bbd1-55485cb7b993 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:57:40 compute-0 nova_compute[189387]: 2025-11-26 23:57:40.161 189391 DEBUG nova.compute.manager [req-a318765e-06c8-4268-ac5b-df3dae4a9237 req-de93a3a1-5470-4fc8-bbd1-55485cb7b993 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] No waiting events found dispatching network-vif-unplugged-538c994f-bee1-4965-9065-a8ef17e40bea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:57:40 compute-0 nova_compute[189387]: 2025-11-26 23:57:40.161 189391 DEBUG nova.compute.manager [req-a318765e-06c8-4268-ac5b-df3dae4a9237 req-de93a3a1-5470-4fc8-bbd1-55485cb7b993 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Received event network-vif-unplugged-538c994f-bee1-4965-9065-a8ef17e40bea for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 26 23:57:41 compute-0 nova_compute[189387]: 2025-11-26 23:57:41.132 189391 DEBUG nova.network.neutron [-] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 26 23:57:41 compute-0 nova_compute[189387]: 2025-11-26 23:57:41.154 189391 INFO nova.compute.manager [-] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Took 1.62 seconds to deallocate network for instance.#033[00m
Nov 26 23:57:41 compute-0 nova_compute[189387]: 2025-11-26 23:57:41.173 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 26 23:57:41 compute-0 nova_compute[189387]: 2025-11-26 23:57:41.213 189391 DEBUG nova.compute.manager [req-7f4638f5-5a53-4ae6-979a-4c02b065e36b req-5b53e5a0-13de-46ca-a076-19baa60ff384 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Received event network-vif-deleted-538c994f-bee1-4965-9065-a8ef17e40bea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:57:41 compute-0 nova_compute[189387]: 2025-11-26 23:57:41.217 189391 DEBUG oslo_concurrency.lockutils [None req-d07bd6ac-4e55-42e0-bf03-51e503661449 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:57:41 compute-0 nova_compute[189387]: 2025-11-26 23:57:41.217 189391 DEBUG oslo_concurrency.lockutils [None req-d07bd6ac-4e55-42e0-bf03-51e503661449 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:57:41 compute-0 nova_compute[189387]: 2025-11-26 23:57:41.308 189391 DEBUG nova.compute.provider_tree [None req-d07bd6ac-4e55-42e0-bf03-51e503661449 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 26 23:57:41 compute-0 nova_compute[189387]: 2025-11-26 23:57:41.325 189391 DEBUG nova.scheduler.client.report [None req-d07bd6ac-4e55-42e0-bf03-51e503661449 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 26 23:57:41 compute-0 nova_compute[189387]: 2025-11-26 23:57:41.366 189391 DEBUG oslo_concurrency.lockutils [None req-d07bd6ac-4e55-42e0-bf03-51e503661449 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:57:41 compute-0 nova_compute[189387]: 2025-11-26 23:57:41.395 189391 INFO nova.scheduler.client.report [None req-d07bd6ac-4e55-42e0-bf03-51e503661449 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Deleted allocations for instance b7d5e999-38ca-46e8-b572-cc9fad0fc2cc#033[00m
Nov 26 23:57:41 compute-0 nova_compute[189387]: 2025-11-26 23:57:41.476 189391 DEBUG oslo_concurrency.lockutils [None req-d07bd6ac-4e55-42e0-bf03-51e503661449 5715267a6ec9422aa9b3ef4a2956aa77 717a3950b66241768222cb5d4ba3291e - - default default] Lock "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:57:42 compute-0 nova_compute[189387]: 2025-11-26 23:57:42.252 189391 DEBUG nova.compute.manager [req-a31b8da5-f7dc-4839-82a3-83a10db75ec5 req-efb2e39e-9e3f-4a4c-85e4-cff6a847c052 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Received event network-vif-plugged-538c994f-bee1-4965-9065-a8ef17e40bea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 26 23:57:42 compute-0 nova_compute[189387]: 2025-11-26 23:57:42.253 189391 DEBUG oslo_concurrency.lockutils [req-a31b8da5-f7dc-4839-82a3-83a10db75ec5 req-efb2e39e-9e3f-4a4c-85e4-cff6a847c052 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Acquiring lock "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 26 23:57:42 compute-0 nova_compute[189387]: 2025-11-26 23:57:42.253 189391 DEBUG oslo_concurrency.lockutils [req-a31b8da5-f7dc-4839-82a3-83a10db75ec5 req-efb2e39e-9e3f-4a4c-85e4-cff6a847c052 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 26 23:57:42 compute-0 nova_compute[189387]: 2025-11-26 23:57:42.254 189391 DEBUG oslo_concurrency.lockutils [req-a31b8da5-f7dc-4839-82a3-83a10db75ec5 req-efb2e39e-9e3f-4a4c-85e4-cff6a847c052 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] Lock "b7d5e999-38ca-46e8-b572-cc9fad0fc2cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 26 23:57:42 compute-0 nova_compute[189387]: 2025-11-26 23:57:42.254 189391 DEBUG nova.compute.manager [req-a31b8da5-f7dc-4839-82a3-83a10db75ec5 req-efb2e39e-9e3f-4a4c-85e4-cff6a847c052 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] No waiting events found dispatching network-vif-plugged-538c994f-bee1-4965-9065-a8ef17e40bea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 26 23:57:42 compute-0 nova_compute[189387]: 2025-11-26 23:57:42.254 189391 WARNING nova.compute.manager [req-a31b8da5-f7dc-4839-82a3-83a10db75ec5 req-efb2e39e-9e3f-4a4c-85e4-cff6a847c052 f4b959ae90624156b01d87fb2c891849 5b764f2592524dda9517ebc446e5ed61 - - default default] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Received unexpected event network-vif-plugged-538c994f-bee1-4965-9065-a8ef17e40bea for instance with vm_state deleted and task_state None.#033[00m
Nov 26 23:57:44 compute-0 nova_compute[189387]: 2025-11-26 23:57:44.445 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:57:46 compute-0 nova_compute[189387]: 2025-11-26 23:57:46.176 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:57:46 compute-0 nova_compute[189387]: 2025-11-26 23:57:46.362 189391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764201451.361377, 0449208f-d12b-40cb-aa71-6f67f687cb6f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 23:57:46 compute-0 nova_compute[189387]: 2025-11-26 23:57:46.363 189391 INFO nova.compute.manager [-] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] VM Stopped (Lifecycle Event)
Nov 26 23:57:46 compute-0 nova_compute[189387]: 2025-11-26 23:57:46.384 189391 DEBUG nova.compute.manager [None req-bfd282bf-fde9-40e4-9440-ac50c95ac6d7 - - - - - -] [instance: 0449208f-d12b-40cb-aa71-6f67f687cb6f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 26 23:57:49 compute-0 nova_compute[189387]: 2025-11-26 23:57:49.449 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:57:51 compute-0 nova_compute[189387]: 2025-11-26 23:57:51.179 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:57:51 compute-0 podman[258378]: 2025-11-26 23:57:51.850301584 +0000 UTC m=+0.137169665 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
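The podman[258378] line is a periodic healthcheck result for the multipathd container; its config_data shows the probe is simply /openstack/healthcheck executed inside the container. The same check can be run on demand, assuming the podman CLI is available on the host (exit status 0 means healthy):

```python
import subprocess

# Hypothetical on-demand probe; podman runs the same test on its own timer
# and records the result as the health_status events seen throughout this log.
result = subprocess.run(
    ["podman", "healthcheck", "run", "multipathd"],
    capture_output=True, text=True,
)
print("healthy" if result.returncode == 0 else f"unhealthy: {result.stdout.strip()}")
```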
Nov 26 23:57:54 compute-0 nova_compute[189387]: 2025-11-26 23:57:54.417 189391 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764201459.416194, b7d5e999-38ca-46e8-b572-cc9fad0fc2cc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 26 23:57:54 compute-0 nova_compute[189387]: 2025-11-26 23:57:54.418 189391 INFO nova.compute.manager [-] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] VM Stopped (Lifecycle Event)
Nov 26 23:57:54 compute-0 nova_compute[189387]: 2025-11-26 23:57:54.448 189391 DEBUG nova.compute.manager [None req-83823eb3-8ebc-416b-acc8-be82755faa0b - - - - - -] [instance: b7d5e999-38ca-46e8-b572-cc9fad0fc2cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
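Both Stopped notifications above follow the same route: the libvirt driver emits a LifecycleEvent, and the compute manager logs "VM Stopped (Lifecycle Event)" before re-reading the power state from the hypervisor. An illustrative reduction of that path, with hypothetical names except where they quote the log:

```python
import time

EVENT_NAMES = {0: "Stopped", 1: "Started", 2: "Paused", 3: "Resumed"}

class LifecycleEvent:
    def __init__(self, uuid, transition):
        self.timestamp = time.time()   # the float in "<LifecycleEvent: 1764201459.41..., ...>"
        self.uuid = uuid
        self.transition = transition

def handle_lifecycle_event(event):
    # nova.compute.manager logs the INFO line here, then re-reads the power
    # state from the hypervisor before syncing it to the database
    # ("Checking state _get_power_state")
    print(f"[instance: {event.uuid}] VM {EVENT_NAMES[event.transition]} (Lifecycle Event)")

handle_lifecycle_event(LifecycleEvent("b7d5e999-38ca-46e8-b572-cc9fad0fc2cc", 0))
```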
Nov 26 23:57:54 compute-0 nova_compute[189387]: 2025-11-26 23:57:54.454 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:57:54 compute-0 nova_compute[189387]: 2025-11-26 23:57:54.470 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:57:55 compute-0 podman[258397]: 2025-11-26 23:57:55.813382549 +0000 UTC m=+0.096898922 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 23:57:56 compute-0 nova_compute[189387]: 2025-11-26 23:57:56.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:57:56 compute-0 nova_compute[189387]: 2025-11-26 23:57:56.169 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:57:56 compute-0 nova_compute[189387]: 2025-11-26 23:57:56.170 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:57:56 compute-0 nova_compute[189387]: 2025-11-26 23:57:56.170 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:57:56 compute-0 nova_compute[189387]: 2025-11-26 23:57:56.170 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:57:56 compute-0 nova_compute[189387]: 2025-11-26 23:57:56.181 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:57:56 compute-0 nova_compute[189387]: 2025-11-26 23:57:56.715 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:57:56 compute-0 nova_compute[189387]: 2025-11-26 23:57:56.717 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5363MB free_disk=72.30624771118164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
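The pci_devices payload in the resource view above is plain JSON, so it can be summarized directly, for example to confirm that every device reports "numa_node": null, consistent with the multi-socket NUMA warning two lines earlier (the list below is trimmed to two of the eleven devices in the log line):

```python
import json
from collections import Counter

pci_devices = json.loads("""[
  {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0",
   "product_id": "1001", "vendor_id": "1af4", "numa_node": null,
   "label": "label_1af4_1001", "dev_type": "type-PCI"},
  {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0",
   "product_id": "1237", "vendor_id": "8086", "numa_node": null,
   "label": "label_8086_1237", "dev_type": "type-PCI"}
]""")

print(Counter(d["vendor_id"] for d in pci_devices))          # 1af4 = virtio, 8086 = Intel
print([d["address"] for d in pci_devices if d["numa_node"] is None])
```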
Nov 26 23:57:56 compute-0 nova_compute[189387]: 2025-11-26 23:57:56.717 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:57:56 compute-0 nova_compute[189387]: 2025-11-26 23:57:56.718 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:57:56 compute-0 nova_compute[189387]: 2025-11-26 23:57:56.780 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:57:56 compute-0 nova_compute[189387]: 2025-11-26 23:57:56.780 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 23:57:56 compute-0 nova_compute[189387]: 2025-11-26 23:57:56.814 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:57:56 compute-0 nova_compute[189387]: 2025-11-26 23:57:56.827 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 26 23:57:56 compute-0 nova_compute[189387]: 2025-11-26 23:57:56.847 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:57:56 compute-0 nova_compute[189387]: 2025-11-26 23:57:56.847 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
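The inventory dict reported to placement above determines what the scheduler may pack onto this node; placement derives schedulable capacity per resource class as (total - reserved) * allocation_ratio. A quick check of that arithmetic against the logged values:

```python
# Values copied from the nova.scheduler.client.report line above
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable")
# -> VCPU: 32, MEMORY_MB: 7168, DISK_GB: 70.2
```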
Nov 26 23:57:57 compute-0 nova_compute[189387]: 2025-11-26 23:57:57.848 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:57:57 compute-0 nova_compute[189387]: 2025-11-26 23:57:57.849 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:57:57 compute-0 nova_compute[189387]: 2025-11-26 23:57:57.849 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 23:57:57 compute-0 nova_compute[189387]: 2025-11-26 23:57:57.866 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 23:57:57 compute-0 nova_compute[189387]: 2025-11-26 23:57:57.867 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:57:57 compute-0 nova_compute[189387]: 2025-11-26 23:57:57.867 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
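The _reclaim_queued_deletes task short-circuits exactly as logged whenever soft delete is disabled: with nova's reclaim_instance_interval left at its default of 0, deleted instances are destroyed immediately rather than lingering in SOFT_DELETED. A sketch of the guard (the CONF class below stands in for oslo.config's global config object):

```python
class CONF:
    # nova.conf [DEFAULT] reclaim_instance_interval; 0 is the upstream default
    reclaim_instance_interval = 0

def _reclaim_queued_deletes():
    if CONF.reclaim_instance_interval <= 0:
        print("CONF.reclaim_instance_interval <= 0, skipping...")
        return
    # otherwise: look up instances in vm_state SOFT_DELETED older than the
    # interval and hard-delete them

_reclaim_queued_deletes()
```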
Nov 26 23:57:59 compute-0 nova_compute[189387]: 2025-11-26 23:57:59.458 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:57:59 compute-0 podman[203621]: time="2025-11-26T23:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:57:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 26 23:57:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4351 "" "Go-http-client/1.1"
Nov 26 23:58:01 compute-0 nova_compute[189387]: 2025-11-26 23:58:01.126 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:58:01 compute-0 nova_compute[189387]: 2025-11-26 23:58:01.126 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:58:01 compute-0 nova_compute[189387]: 2025-11-26 23:58:01.184 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:58:01 compute-0 openstack_network_exporter[205787]: ERROR   23:58:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:58:01 compute-0 openstack_network_exporter[205787]: ERROR   23:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:58:01 compute-0 openstack_network_exporter[205787]: ERROR   23:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:58:01 compute-0 openstack_network_exporter[205787]: ERROR   23:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:58:01 compute-0 openstack_network_exporter[205787]: ERROR   23:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
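These recurring exporter errors are expected noise on a compute node: ovn-northd runs only on OVN central nodes, and with the kernel OVS datapath the dpif-netdev/* PMD commands have no userspace datapath to query. A manual probe of the same control sockets, assuming the ovs-appctl/ovn-appctl CLIs are installed on the host:

```python
import subprocess

probes = [
    ["ovs-appctl", "dpif/show"],                    # vswitchd socket, present here
    ["ovn-appctl", "-t", "ovn-northd", "status"],   # central nodes only: fails here
]
for cmd in probes:
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(" ".join(cmd), "->", (result.stdout or result.stderr).strip())
```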
Nov 26 23:58:01 compute-0 podman[258420]: 2025-11-26 23:58:01.849551114 +0000 UTC m=+0.137121334 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.4, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 26 23:58:02 compute-0 nova_compute[189387]: 2025-11-26 23:58:02.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:58:04 compute-0 nova_compute[189387]: 2025-11-26 23:58:04.121 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:58:04 compute-0 nova_compute[189387]: 2025-11-26 23:58:04.465 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:58:05 compute-0 nova_compute[189387]: 2025-11-26 23:58:05.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:58:06 compute-0 nova_compute[189387]: 2025-11-26 23:58:06.187 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:58:09 compute-0 nova_compute[189387]: 2025-11-26 23:58:09.470 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:58:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:58:09.669 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:58:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:58:09.669 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:58:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:58:09.670 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:58:09 compute-0 podman[258442]: 2025-11-26 23:58:09.844384912 +0000 UTC m=+0.134171006 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, distribution-scope=public, io.openshift.tags=base rhel9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, managed_by=edpm_ansible, vcs-type=git, name=ubi9, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.expose-services=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543)
Nov 26 23:58:09 compute-0 podman[258455]: 2025-11-26 23:58:09.858766125 +0000 UTC m=+0.115457167 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, distribution-scope=public, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, version=9.6, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 26 23:58:09 compute-0 podman[258450]: 2025-11-26 23:58:09.859604787 +0000 UTC m=+0.115323954 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:58:09 compute-0 podman[258445]: 2025-11-26 23:58:09.86494827 +0000 UTC m=+0.130817967 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:58:09 compute-0 podman[258444]: 2025-11-26 23:58:09.869149952 +0000 UTC m=+0.142751164 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 23:58:09 compute-0 podman[258443]: 2025-11-26 23:58:09.882381774 +0000 UTC m=+0.165755557 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
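Among the health-check burst above, the node_exporter entry is the most selective config: --collector.systemd.unit-include keeps only systemd units matching the given regex (node_exporter anchors the pattern, so fullmatch is the right analogue). A quick sanity check of which units survive the filter:

```python
import re

# Pattern copied from the node_exporter config_data above
unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

for unit in ["ovs-vswitchd.service", "virtqemud.service",
             "edpm_nova_compute.service", "sshd.service"]:
    print(unit, bool(unit_include.fullmatch(unit)))
# only sshd.service falls outside the filter
```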
Nov 26 23:58:11 compute-0 nova_compute[189387]: 2025-11-26 23:58:11.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:58:11 compute-0 nova_compute[189387]: 2025-11-26 23:58:11.189 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:58:14 compute-0 nova_compute[189387]: 2025-11-26 23:58:14.476 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:58:16 compute-0 nova_compute[189387]: 2025-11-26 23:58:16.194 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:58:18 compute-0 nova_compute[189387]: 2025-11-26 23:58:18.121 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:58:19 compute-0 nova_compute[189387]: 2025-11-26 23:58:19.480 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:58:21 compute-0 nova_compute[189387]: 2025-11-26 23:58:21.198 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:58:22 compute-0 podman[258561]: 2025-11-26 23:58:22.84943142 +0000 UTC m=+0.133882708 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 23:58:24 compute-0 nova_compute[189387]: 2025-11-26 23:58:24.485 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:58:26 compute-0 nova_compute[189387]: 2025-11-26 23:58:26.202 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:58:26 compute-0 podman[258580]: 2025-11-26 23:58:26.841806715 +0000 UTC m=+0.125952796 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 26 23:58:29 compute-0 nova_compute[189387]: 2025-11-26 23:58:29.489 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:58:29 compute-0 podman[203621]: time="2025-11-26T23:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:58:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 26 23:58:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4353 "" "Go-http-client/1.1"
Nov 26 23:58:31 compute-0 nova_compute[189387]: 2025-11-26 23:58:31.203 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:58:31 compute-0 ovn_controller[97697]: 2025-11-26T23:58:31Z|00245|memory_trim|INFO|Detected inactivity (last active 30020 ms ago): trimming memory
Nov 26 23:58:31 compute-0 openstack_network_exporter[205787]: ERROR   23:58:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:58:31 compute-0 openstack_network_exporter[205787]: ERROR   23:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:58:31 compute-0 openstack_network_exporter[205787]: ERROR   23:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:58:31 compute-0 openstack_network_exporter[205787]: ERROR   23:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:58:31 compute-0 openstack_network_exporter[205787]: ERROR   23:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:58:32 compute-0 podman[258604]: 2025-11-26 23:58:32.812993859 +0000 UTC m=+0.108935883 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true)
Nov 26 23:58:34 compute-0 nova_compute[189387]: 2025-11-26 23:58:34.493 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:58:36 compute-0 nova_compute[189387]: 2025-11-26 23:58:36.206 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.854 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.855 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.855 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8269400>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8269400>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8269400>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8269400>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8269400>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8269400>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8269400>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8269400>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8269400>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8269400>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8269400>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8269400>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8269400>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8269400>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8269400>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.860 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.861 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.861 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8269400>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.862 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8269400>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.862 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8269400>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.861 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.863 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.863 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8269400>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.864 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8269400>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.864 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8269400>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.865 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8269400>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.865 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8269400>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.865 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8269400>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.866 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8269400>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.866 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce8269400>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
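The register_pollster_execution lines above all point at one shared ThreadPoolExecutor (0x7f7ce8269400) and the same cache, pollster-history, and discovery-cache dictionaries. A minimal sketch of that registration pattern, assuming simplified stand-in names (PollingTask and _run are illustrative, not ceilometer's actual classes):

    # Minimal sketch of the registration pattern in the log: one shared
    # executor plus shared per-cycle dicts. PollingTask/_run are stand-ins.
    from concurrent.futures import ThreadPoolExecutor

    class PollingTask:
        def __init__(self, max_workers=4):
            self.executor = ThreadPoolExecutor(max_workers=max_workers)
            self.cache = {}             # per-cycle sample cache
            self.pollster_history = {}  # e.g. {'disk.ephemeral.size': []}
            self.discovery_cache = {}   # e.g. {'local_instances': []}
            self.futures = {}           # future -> pollster name

        def register_pollster_execution(self, pollster):
            # Every pollster is submitted to the same executor and sees the
            # same mutable dicts, which is why the logged history and
            # discovery caches grow as the cycle proceeds.
            future = self.executor.submit(self._run, pollster)
            self.futures[future] = pollster.name

        def _run(self, pollster):
            ...  # discovery + sampling; see the sketch further below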
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.867 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.867 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.867 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.867 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.867 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.868 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.868 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.868 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.868 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.868 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.868 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.869 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.869 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.869 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.869 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.869 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.870 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.870 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.870 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.870 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.870 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.870 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.871 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.871 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.871 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.871 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.871 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.871 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.872 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.872 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.872 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.872 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.872 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.873 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.873 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.873 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.873 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.873 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.873 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.874 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.874 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.874 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.874 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.874 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.875 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
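Each "Executing discovery process ..." / "Skip pollster ..." pair above follows the same shape: run (or reuse) a discovery method for the pollster, then bail out when it returns nothing. A hedged sketch of that flow; run_discovery and the pollster attributes (name, discovery_method, get_samples) are assumptions for illustration, not ceilometer's exact API:

    # Sketch of the discover-then-skip flow shown by this polling cycle.
    import logging

    LOG = logging.getLogger(__name__)

    def discover(method_name, discovery_cache):
        # Cache per discovery method, so the many pollsters sharing
        # 'local_instances' trigger only one real lookup per cycle.
        if method_name not in discovery_cache:
            discovery_cache[method_name] = run_discovery(method_name)
        return discovery_cache[method_name]

    def _internal_pollster_run(pollster, discovery_cache, history):
        resources = discover(pollster.discovery_method, discovery_cache)
        history[pollster.name] = list(resources)
        if not resources:
            # No instances on this host, so every compute pollster is
            # skipped, which is exactly what this cycle shows.
            LOG.debug("Skip pollster %s, no resources found this cycle",
                      pollster.name)
            return []
        return list(pollster.get_samples(resources))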
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.875 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.875 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.876 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.876 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.876 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.876 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.876 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.876 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.877 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.877 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.877 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.877 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.877 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.877 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.877 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.878 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.878 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.878 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.878 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.878 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.878 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.879 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.879 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.879 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.879 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 26 23:58:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-26 23:58:36.879 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
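The "Finished processing pollster [...]" run is the completion side of the futures submitted earlier. A sketch of that bookkeeping with concurrent.futures.as_completed (execute_polling_task_processing is the logged name; the body is an assumption):

    # Completion bookkeeping behind "Finished processing pollster [...]":
    # wait on each submitted future and log per pollster name.
    from concurrent.futures import as_completed

    def execute_polling_task_processing(task):
        for future in as_completed(task.futures):
            future.result()  # re-raise any pollster exception
            LOG.debug("Finished processing pollster [%s].",
                      task.futures[future])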
Nov 26 23:58:39 compute-0 nova_compute[189387]: 2025-11-26 23:58:39.503 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:58:40 compute-0 podman[258627]: 2025-11-26 23:58:40.814531605 +0000 UTC m=+0.084035110 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 23:58:40 compute-0 podman[258625]: 2025-11-26 23:58:40.836219013 +0000 UTC m=+0.120375278 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 26 23:58:40 compute-0 podman[258633]: 2025-11-26 23:58:40.839747006 +0000 UTC m=+0.106194350 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 26 23:58:40 compute-0 podman[258626]: 2025-11-26 23:58:40.85299766 +0000 UTC m=+0.124310193 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 26 23:58:40 compute-0 podman[258624]: 2025-11-26 23:58:40.85978389 +0000 UTC m=+0.137541115 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, vcs-type=git, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=, container_name=kepler)
Nov 26 23:58:40 compute-0 podman[258636]: 2025-11-26 23:58:40.865864282 +0000 UTC m=+0.125065153 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, config_id=edpm, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container)
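Each podman health_status event above is driven by the container's configured healthcheck: config_data mounts a script directory at /openstack and registers 'test': '/openstack/healthcheck'. The same checks can be run by hand with podman's CLI; a small wrapper (container names taken from the log, the wrapper itself is illustrative):

    # Run each container's configured healthcheck; exit status 0 from
    # `podman healthcheck run` means healthy.
    import subprocess

    def health_status(name: str) -> str:
        rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
        return "healthy" if rc == 0 else "unhealthy"

    for name in ("ovn_metadata_agent", "ovn_controller",
                 "ceilometer_agent_ipmi", "node_exporter",
                 "kepler", "openstack_network_exporter"):
        print(name, health_status(name))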
Nov 26 23:58:41 compute-0 nova_compute[189387]: 2025-11-26 23:58:41.209 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:58:44 compute-0 nova_compute[189387]: 2025-11-26 23:58:44.509 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:58:46 compute-0 nova_compute[189387]: 2025-11-26 23:58:46.212 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:58:49 compute-0 nova_compute[189387]: 2025-11-26 23:58:49.513 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:58:51 compute-0 nova_compute[189387]: 2025-11-26 23:58:51.215 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
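The recurring "[POLLIN] on fd 26 __log_wakeup" lines are python-ovs's poller reporting that nova's OVSDB connection socket became readable. A stdlib equivalent of that wakeup (illustrative; not the ovs.poller implementation):

    # Block until the OVSDB socket is readable, mirroring the condition the
    # vlog messages report.
    import select

    def wait_for_ovsdb(sock, timeout_ms=5000):
        p = select.poll()
        p.register(sock.fileno(), select.POLLIN)
        for fd, events in p.poll(timeout_ms):
            if events & select.POLLIN:
                print(f"[POLLIN] on fd {fd}")
                return True
        return False  # timer wakeup: nothing to read this interval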
Nov 26 23:58:53 compute-0 podman[258745]: 2025-11-26 23:58:53.803955858 +0000 UTC m=+0.091919299 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 26 23:58:54 compute-0 nova_compute[189387]: 2025-11-26 23:58:54.519 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:58:56 compute-0 nova_compute[189387]: 2025-11-26 23:58:56.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:58:56 compute-0 nova_compute[189387]: 2025-11-26 23:58:56.218 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:58:56 compute-0 nova_compute[189387]: 2025-11-26 23:58:56.226 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:58:56 compute-0 nova_compute[189387]: 2025-11-26 23:58:56.227 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:58:56 compute-0 nova_compute[189387]: 2025-11-26 23:58:56.228 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
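The Acquiring/acquired/released triplet around "compute_resources" is oslo.concurrency's standard lock logging. The guard below uses the real lockutils API; the decorated body is a stand-in:

    # Serialize resource-tracker work on the "compute_resources" lock, as the
    # acquire/release pairs in the log show. Held ~1 ms here because the
    # cache cleanup is trivial when no compute nodes were deleted.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        pass  # stand-in body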
Nov 26 23:58:56 compute-0 nova_compute[189387]: 2025-11-26 23:58:56.228 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:58:56 compute-0 nova_compute[189387]: 2025-11-26 23:58:56.780 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:58:56 compute-0 nova_compute[189387]: 2025-11-26 23:58:56.782 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5365MB free_disk=72.29936599731445GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 26 23:58:56 compute-0 nova_compute[189387]: 2025-11-26 23:58:56.783 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:58:56 compute-0 nova_compute[189387]: 2025-11-26 23:58:56.784 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:58:56 compute-0 nova_compute[189387]: 2025-11-26 23:58:56.916 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:58:56 compute-0 nova_compute[189387]: 2025-11-26 23:58:56.917 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 23:58:56 compute-0 nova_compute[189387]: 2025-11-26 23:58:56.942 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:58:56 compute-0 nova_compute[189387]: 2025-11-26 23:58:56.979 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
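The inventory dict above is what nova reports to Placement; schedulable capacity per resource class is (total - reserved) * allocation_ratio. Recomputing it from the logged values:

    # Effective capacity per resource class from the logged inventory.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7680, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2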
Nov 26 23:58:56 compute-0 nova_compute[189387]: 2025-11-26 23:58:56.980 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:58:56 compute-0 nova_compute[189387]: 2025-11-26 23:58:56.981 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:58:57 compute-0 podman[258764]: 2025-11-26 23:58:57.789699897 +0000 UTC m=+0.081375689 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 26 23:58:57 compute-0 nova_compute[189387]: 2025-11-26 23:58:57.980 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:58:57 compute-0 nova_compute[189387]: 2025-11-26 23:58:57.980 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 26 23:58:57 compute-0 nova_compute[189387]: 2025-11-26 23:58:57.980 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 26 23:58:58 compute-0 nova_compute[189387]: 2025-11-26 23:58:58.105 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 26 23:58:59 compute-0 nova_compute[189387]: 2025-11-26 23:58:59.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:58:59 compute-0 nova_compute[189387]: 2025-11-26 23:58:59.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
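The "Running periodic task ..." lines come from oslo.service's periodic task machinery, and the reclaim message shows the usual config gate inside one such task. A hedged sketch of the pattern, assuming oslo.service is installed (the class name and spacing value are illustrative):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self, conf):
            super().__init__(conf)
            self.reclaim_instance_interval = 0  # value implied by the log

        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            # Mirrors "CONF.reclaim_instance_interval <= 0, skipping..."
            if self.reclaim_instance_interval <= 0:
                return

    # Calling run_periodic_tasks(context) on an instance drives every
    # method registered with the decorator, producing lines like those above.
    manager = Manager(cfg.CONF)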
Nov 26 23:58:59 compute-0 nova_compute[189387]: 2025-11-26 23:58:59.524 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:58:59 compute-0 podman[203621]: time="2025-11-26T23:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:58:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 26 23:58:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4346 "" "Go-http-client/1.1"
Nov 26 23:59:01 compute-0 nova_compute[189387]: 2025-11-26 23:59:01.219 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:59:01 compute-0 openstack_network_exporter[205787]: ERROR   23:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:59:01 compute-0 openstack_network_exporter[205787]: ERROR   23:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:59:01 compute-0 openstack_network_exporter[205787]: ERROR   23:59:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:59:01 compute-0 openstack_network_exporter[205787]: ERROR   23:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:59:01 compute-0 openstack_network_exporter[205787]: ERROR   23:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:59:02 compute-0 nova_compute[189387]: 2025-11-26 23:59:02.126 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:59:02 compute-0 nova_compute[189387]: 2025-11-26 23:59:02.127 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:59:03 compute-0 nova_compute[189387]: 2025-11-26 23:59:03.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:59:03 compute-0 podman[258787]: 2025-11-26 23:59:03.793649714 +0000 UTC m=+0.086697800 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fcb38123433469bfaad5a5f425f59527)
Nov 26 23:59:04 compute-0 nova_compute[189387]: 2025-11-26 23:59:04.120 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:59:04 compute-0 nova_compute[189387]: 2025-11-26 23:59:04.529 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:59:06 compute-0 nova_compute[189387]: 2025-11-26 23:59:06.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:59:06 compute-0 nova_compute[189387]: 2025-11-26 23:59:06.222 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:59:06 compute-0 systemd-logind[819]: New session 32 of user zuul.
Nov 26 23:59:06 compute-0 systemd[1]: Started Session 32 of User zuul.
Nov 26 23:59:09 compute-0 nova_compute[189387]: 2025-11-26 23:59:09.533 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:59:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:59:09.670 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:59:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:59:09.671 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:59:09 compute-0 ovn_metadata_agent[106590]: 2025-11-26 23:59:09.672 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:59:11 compute-0 podman[258966]: 2025-11-26 23:59:11.038272744 +0000 UTC m=+0.093972945 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., vcs-type=git, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release-0.7.12=, io.buildah.version=1.29.0, release=1214.1726694543, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, architecture=x86_64)
Nov 26 23:59:11 compute-0 podman[258952]: 2025-11-26 23:59:11.055669868 +0000 UTC m=+0.150289556 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 26 23:59:11 compute-0 podman[258958]: 2025-11-26 23:59:11.061673868 +0000 UTC m=+0.132055269 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 26 23:59:11 compute-0 podman[258965]: 2025-11-26 23:59:11.068545011 +0000 UTC m=+0.128214798 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm)
Nov 26 23:59:11 compute-0 podman[258974]: 2025-11-26 23:59:11.068561641 +0000 UTC m=+0.106555590 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, name=ubi9-minimal, container_name=openstack_network_exporter, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-type=git, distribution-scope=public, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 26 23:59:11 compute-0 podman[258951]: 2025-11-26 23:59:11.077061898 +0000 UTC m=+0.177154321 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 26 23:59:11 compute-0 nova_compute[189387]: 2025-11-26 23:59:11.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:59:11 compute-0 nova_compute[189387]: 2025-11-26 23:59:11.224 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:59:11 compute-0 ovs-vsctl[259097]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
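The db_ctl_base error above just means the Open_vSwitch record's other_config column has no dpdk-init key, i.e. this node runs OVS without DPDK; ovs-vsctl exits non-zero for a missing key rather than returning an empty value. A small stdlib sketch that treats the missing key as a normal answer:

    import subprocess

    # Re-run the probe that produced the error logged above.
    proc = subprocess.run(
        ["ovs-vsctl", "get", "Open_vSwitch", ".", "other_config:dpdk-init"],
        capture_output=True, text=True,
    )
    if proc.returncode == 0:
        print("dpdk-init =", proc.stdout.strip())
    else:
        print("dpdk-init not set; OVS is running without DPDK")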
Nov 26 23:59:12 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 258834 (sos)
Nov 26 23:59:12 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 26 23:59:12 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 26 23:59:13 compute-0 virtqemud[188953]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 26 23:59:13 compute-0 virtqemud[188953]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 26 23:59:13 compute-0 virtqemud[188953]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
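virtqemud is one of libvirt's modular daemons, and the three failures above show it probing the read-only sockets of sibling daemons (virtnetworkd, virtnwfilterd, virtstoraged) that are not running on this node; when only the QEMU driver is deployed this is expected noise rather than a fault. A stdlib sketch to check which modular-daemon sockets are actually present (socket names taken from the log; the check itself is hypothetical):

    from pathlib import Path

    for name in ("virtqemud-sock", "virtnetworkd-sock-ro",
                 "virtnwfilterd-sock-ro", "virtstoraged-sock-ro"):
        path = Path("/var/run/libvirt") / name
        print(path, "present" if path.exists() else "missing")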
Nov 26 23:59:14 compute-0 nova_compute[189387]: 2025-11-26 23:59:14.538 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:59:16 compute-0 nova_compute[189387]: 2025-11-26 23:59:16.225 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:59:17 compute-0 systemd[1]: Starting Hostname Service...
Nov 26 23:59:17 compute-0 systemd[1]: Started Hostname Service.
Nov 26 23:59:19 compute-0 nova_compute[189387]: 2025-11-26 23:59:19.541 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:59:21 compute-0 nova_compute[189387]: 2025-11-26 23:59:21.227 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:59:24 compute-0 nova_compute[189387]: 2025-11-26 23:59:24.546 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:59:24 compute-0 podman[260433]: 2025-11-26 23:59:24.788458536 +0000 UTC m=+0.082588332 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 26 23:59:26 compute-0 ovs-appctl[260853]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 26 23:59:26 compute-0 ovs-appctl[260859]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 26 23:59:26 compute-0 nova_compute[189387]: 2025-11-26 23:59:26.227 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:59:26 compute-0 ovs-appctl[260865]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 26 23:59:28 compute-0 podman[261452]: 2025-11-26 23:59:28.352213111 +0000 UTC m=+0.087502042 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 26 23:59:29 compute-0 nova_compute[189387]: 2025-11-26 23:59:29.549 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:59:29 compute-0 podman[203621]: time="2025-11-26T23:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:59:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 26 23:59:29 compute-0 podman[203621]: @ - - [26/Nov/2025:23:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4350 "" "Go-http-client/1.1"
Nov 26 23:59:31 compute-0 nova_compute[189387]: 2025-11-26 23:59:31.229 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:59:31 compute-0 openstack_network_exporter[205787]: ERROR   23:59:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 26 23:59:31 compute-0 openstack_network_exporter[205787]: ERROR   23:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:59:31 compute-0 openstack_network_exporter[205787]: ERROR   23:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 26 23:59:31 compute-0 openstack_network_exporter[205787]: ERROR   23:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 26 23:59:31 compute-0 openstack_network_exporter[205787]: ERROR   23:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 26 23:59:34 compute-0 podman[261884]: 2025-11-26 23:59:34.007838499 +0000 UTC m=+0.148156628 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=fcb38123433469bfaad5a5f425f59527)
Nov 26 23:59:34 compute-0 nova_compute[189387]: 2025-11-26 23:59:34.553 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:59:35 compute-0 virtqemud[188953]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 26 23:59:36 compute-0 nova_compute[189387]: 2025-11-26 23:59:36.232 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:59:37 compute-0 systemd[1]: Starting Time & Date Service...
Nov 26 23:59:37 compute-0 systemd[1]: Started Time & Date Service.
Nov 26 23:59:39 compute-0 nova_compute[189387]: 2025-11-26 23:59:39.558 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:59:41 compute-0 nova_compute[189387]: 2025-11-26 23:59:41.236 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:59:41 compute-0 podman[262340]: 2025-11-26 23:59:41.245450254 +0000 UTC m=+0.133145529 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 26 23:59:41 compute-0 podman[262361]: 2025-11-26 23:59:41.245372041 +0000 UTC m=+0.101475675 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-type=git, build-date=2025-08-20T13:12:41, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, name=ubi9-minimal, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Nov 26 23:59:41 compute-0 podman[262339]: 2025-11-26 23:59:41.245872814 +0000 UTC m=+0.131089763 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, release=1214.1726694543, architecture=x86_64, config_id=edpm, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, version=9.4)
Nov 26 23:59:41 compute-0 podman[262341]: 2025-11-26 23:59:41.258370337 +0000 UTC m=+0.135132681 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 26 23:59:41 compute-0 podman[262359]: 2025-11-26 23:59:41.263206086 +0000 UTC m=+0.115904958 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 26 23:59:41 compute-0 podman[262352]: 2025-11-26 23:59:41.286555948 +0000 UTC m=+0.142176729 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 26 23:59:44 compute-0 nova_compute[189387]: 2025-11-26 23:59:44.563 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:59:46 compute-0 nova_compute[189387]: 2025-11-26 23:59:46.236 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:59:49 compute-0 nova_compute[189387]: 2025-11-26 23:59:49.566 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:59:51 compute-0 nova_compute[189387]: 2025-11-26 23:59:51.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:59:51 compute-0 nova_compute[189387]: 2025-11-26 23:59:51.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 26 23:59:51 compute-0 nova_compute[189387]: 2025-11-26 23:59:51.166 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 26 23:59:51 compute-0 nova_compute[189387]: 2025-11-26 23:59:51.239 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:59:54 compute-0 nova_compute[189387]: 2025-11-26 23:59:54.570 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:59:55 compute-0 systemd[1]: session-32.scope: Deactivated successfully.
Nov 26 23:59:55 compute-0 systemd[1]: session-32.scope: Consumed 1min 34.007s CPU time, 568.7M memory peak, read 166.7M from disk, written 16.7M to disk.
Nov 26 23:59:55 compute-0 systemd-logind[819]: Session 32 logged out. Waiting for processes to exit.
Nov 26 23:59:55 compute-0 systemd-logind[819]: Removed session 32.
Nov 26 23:59:55 compute-0 systemd-logind[819]: New session 33 of user zuul.
Nov 26 23:59:55 compute-0 systemd[1]: Started Session 33 of User zuul.
Nov 26 23:59:56 compute-0 podman[262456]: 2025-11-26 23:59:56.089303182 +0000 UTC m=+0.374966050 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 26 23:59:56 compute-0 nova_compute[189387]: 2025-11-26 23:59:56.242 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:59:56 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Nov 26 23:59:56 compute-0 systemd-logind[819]: Session 33 logged out. Waiting for processes to exit.
Nov 26 23:59:56 compute-0 systemd-logind[819]: Removed session 33.
Nov 26 23:59:57 compute-0 systemd-logind[819]: New session 34 of user zuul.
Nov 26 23:59:57 compute-0 systemd[1]: Started Session 34 of User zuul.
Nov 26 23:59:57 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Nov 26 23:59:57 compute-0 systemd-logind[819]: Session 34 logged out. Waiting for processes to exit.
Nov 26 23:59:57 compute-0 systemd-logind[819]: Removed session 34.
Nov 26 23:59:58 compute-0 nova_compute[189387]: 2025-11-26 23:59:58.165 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 26 23:59:58 compute-0 nova_compute[189387]: 2025-11-26 23:59:58.201 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:59:58 compute-0 nova_compute[189387]: 2025-11-26 23:59:58.201 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:59:58 compute-0 nova_compute[189387]: 2025-11-26 23:59:58.202 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:59:58 compute-0 nova_compute[189387]: 2025-11-26 23:59:58.202 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 26 23:59:58 compute-0 nova_compute[189387]: 2025-11-26 23:59:58.603 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 26 23:59:58 compute-0 nova_compute[189387]: 2025-11-26 23:59:58.604 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5110MB free_disk=72.29872512817383GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
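The pci_devices field in the resource view above is embedded JSON listing eleven emulated devices, all with numa_node null (consistent with the NUMA warning just before it). A stdlib sketch of summarizing that array, shown here with an abbreviated two-entry excerpt rather than the full list:

    import json
    from collections import Counter

    # Abbreviated excerpt of the pci_devices array from the log line above.
    pci_json = ('[{"dev_id": "pci_0000_00_04_0", "vendor_id": "1af4"},'
                ' {"dev_id": "pci_0000_00_00_0", "vendor_id": "8086"}]')
    devices = json.loads(pci_json)
    # 1af4 is the virtio vendor ID; 8086 is Intel.
    print(Counter(d["vendor_id"] for d in devices))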
Nov 26 23:59:58 compute-0 nova_compute[189387]: 2025-11-26 23:59:58.605 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 26 23:59:58 compute-0 nova_compute[189387]: 2025-11-26 23:59:58.605 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 26 23:59:58 compute-0 podman[262532]: 2025-11-26 23:59:58.779760971 +0000 UTC m=+0.077044864 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 26 23:59:58 compute-0 nova_compute[189387]: 2025-11-26 23:59:58.879 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 26 23:59:58 compute-0 nova_compute[189387]: 2025-11-26 23:59:58.879 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 26 23:59:58 compute-0 nova_compute[189387]: 2025-11-26 23:59:58.965 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing inventories for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 26 23:59:59 compute-0 nova_compute[189387]: 2025-11-26 23:59:59.044 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Updating ProviderTree inventory for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 26 23:59:59 compute-0 nova_compute[189387]: 2025-11-26 23:59:59.045 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Updating inventory in ProviderTree for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 26 23:59:59 compute-0 nova_compute[189387]: 2025-11-26 23:59:59.076 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing aggregate associations for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 26 23:59:59 compute-0 nova_compute[189387]: 2025-11-26 23:59:59.102 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing trait associations for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78, traits: COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE41,HW_CPU_X86_AMD_SVM,HW_CPU_X86_MMX,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,HW_CPU_X86_SSE,HW_CPU_X86_AESNI,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI2,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
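One detail worth flagging in the trait dump above: COMPUTE_SOCKET_PCI_NUMA_AFFINITY is still in the set being refreshed from Placement, even though the driver warned at 23:59:58.603 that the `socket` policy is unsupported here; the refresh reads back what Placement currently records, which can lag the driver's own view. A trivial probe over a hand-copied subset of the list (the subset is chosen for illustration):

    # Subset of the traits logged above; membership checks are the typical
    # way a consumer of this list uses it.
    traits = {
        "COMPUTE_NODE",
        "COMPUTE_IMAGE_TYPE_QCOW2",
        "HW_CPU_X86_AVX2",
        "COMPUTE_SOCKET_PCI_NUMA_AFFINITY",
    }
    print("AVX2 capable:", "HW_CPU_X86_AVX2" in traits)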
Nov 26 23:59:59 compute-0 nova_compute[189387]: 2025-11-26 23:59:59.135 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 26 23:59:59 compute-0 nova_compute[189387]: 2025-11-26 23:59:59.164 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
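The inventory dict logged twice above is what Placement sizes this provider with. Schedulable capacity per resource class is (total - reserved) * allocation_ratio, so against the raw 8/7680/79 totals the host advertises 32 VCPU, 7168 MB of RAM and about 70.2 GB of disk. The arithmetic, with the figures taken straight from the log:

    # Inventory exactly as reported for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: schedulable capacity = {capacity:g}")
    # -> VCPU: 32, MEMORY_MB: 7168, DISK_GB: 70.2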
Nov 26 23:59:59 compute-0 nova_compute[189387]: 2025-11-26 23:59:59.167 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 26 23:59:59 compute-0 nova_compute[189387]: 2025-11-26 23:59:59.168 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 26 23:59:59 compute-0 nova_compute[189387]: 2025-11-26 23:59:59.575 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 26 23:59:59 compute-0 podman[203621]: time="2025-11-26T23:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 26 23:59:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 26 23:59:59 compute-0 podman[203621]: @ - - [26/Nov/2025:23:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4354 "" "Go-http-client/1.1"
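These two requests are prometheus-podman-exporter scraping the libpod REST API through the socket the exporter mounts (/run/podman/podman.sock, per its config_data logged earlier). A stdlib-only sketch of issuing the same containers/json call (socket path and API version copied from the log; this assumes permission to read the socket):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that connects over a unix domain socket."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()), "bytes")  # the log shows 200 / 28289 bytes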
Nov 27 00:00:00 compute-0 nova_compute[189387]: 2025-11-27 00:00:00.128 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:00:00 compute-0 nova_compute[189387]: 2025-11-27 00:00:00.129 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 27 00:00:00 compute-0 nova_compute[189387]: 2025-11-27 00:00:00.129 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 27 00:00:00 compute-0 nova_compute[189387]: 2025-11-27 00:00:00.195 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 27 00:00:00 compute-0 nova_compute[189387]: 2025-11-27 00:00:00.195 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:00:00 compute-0 nova_compute[189387]: 2025-11-27 00:00:00.196 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
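_reclaim_queued_deletes is the periodic task that permanently removes soft-deleted instances; with reclaim_instance_interval at its default of 0 the task returns immediately, which is the skip logged above. The guard, paraphrased rather than quoted from Nova's source:

    # reclaim_instance_interval comes from nova.conf; 0 (the default)
    # means soft-deleted instances are never reclaimed by this task.
    reclaim_instance_interval = 0
    if reclaim_instance_interval <= 0:
        print("CONF.reclaim_instance_interval <= 0, skipping...")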
Nov 27 00:00:01 compute-0 nova_compute[189387]: 2025-11-27 00:00:01.242 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:00:01 compute-0 openstack_network_exporter[205787]: ERROR   00:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:00:01 compute-0 openstack_network_exporter[205787]: ERROR   00:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:00:01 compute-0 openstack_network_exporter[205787]: ERROR   00:00:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 27 00:00:01 compute-0 openstack_network_exporter[205787]: ERROR   00:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 27 00:00:01 compute-0 openstack_network_exporter[205787]: ERROR   00:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
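This burst from openstack_network_exporter recurs on every scrape (it appears again at 00:00:31): on a compute node there is no ovn-northd and no standalone ovsdb-server control socket for the appctl calls to reach, and no userspace (dpif-netdev) datapath for the PMD queries, so these ERRORs read as expected noise here rather than a fault. One way to confirm the missing control sockets (the run directories below are the conventional OVS/OVN locations, an assumption about this host):

    import glob

    # ovs-appctl/ovn-appctl locate daemons via their <name>.<pid>.ctl sockets
    for pattern in ("/var/run/ovn/ovn-northd.*.ctl",
                    "/var/run/openvswitch/ovsdb-server.*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "no control socket files found")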
Nov 27 00:00:02 compute-0 nova_compute[189387]: 2025-11-27 00:00:02.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:00:04 compute-0 nova_compute[189387]: 2025-11-27 00:00:04.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:00:04 compute-0 nova_compute[189387]: 2025-11-27 00:00:04.581 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:00:04 compute-0 systemd[1]: Starting update of the root trust anchor for DNSSEC validation in unbound...
Nov 27 00:00:04 compute-0 systemd[1]: Starting Rotate log files...
Nov 27 00:00:04 compute-0 systemd[1]: unbound-anchor.service: Deactivated successfully.
Nov 27 00:00:04 compute-0 systemd[1]: Finished update of the root trust anchor for DNSSEC validation in unbound.
Nov 27 00:00:04 compute-0 systemd[1]: logrotate.service: Deactivated successfully.
Nov 27 00:00:04 compute-0 systemd[1]: Finished Rotate log files.
Nov 27 00:00:04 compute-0 podman[262557]: 2025-11-27 00:00:04.840693196 +0000 UTC m=+0.121446406 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 27 00:00:05 compute-0 nova_compute[189387]: 2025-11-27 00:00:05.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:00:06 compute-0 nova_compute[189387]: 2025-11-27 00:00:06.120 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:00:06 compute-0 nova_compute[189387]: 2025-11-27 00:00:06.244 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:00:07 compute-0 nova_compute[189387]: 2025-11-27 00:00:07.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:00:07 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 27 00:00:07 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 27 00:00:09 compute-0 nova_compute[189387]: 2025-11-27 00:00:09.584 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:00:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:00:09.671 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 27 00:00:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:00:09.671 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 27 00:00:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:00:09.671 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 27 00:00:11 compute-0 nova_compute[189387]: 2025-11-27 00:00:11.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:00:11 compute-0 nova_compute[189387]: 2025-11-27 00:00:11.257 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:00:11 compute-0 podman[262606]: 2025-11-27 00:00:11.851377035 +0000 UTC m=+0.096494991 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.buildah.version=1.33.7, version=9.6, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, vcs-type=git, name=ubi9-minimal, build-date=2025-08-20T13:12:41, release=1755695350, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9)
Nov 27 00:00:11 compute-0 podman[262586]: 2025-11-27 00:00:11.854801526 +0000 UTC m=+0.115933430 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 27 00:00:11 compute-0 podman[262584]: 2025-11-27 00:00:11.868572743 +0000 UTC m=+0.152388601 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, version=9.4, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, architecture=x86_64, release-0.7.12=, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 27 00:00:11 compute-0 podman[262593]: 2025-11-27 00:00:11.869690493 +0000 UTC m=+0.136037865 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 00:00:11 compute-0 podman[262585]: 2025-11-27 00:00:11.89622924 +0000 UTC m=+0.162878411 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 00:00:11 compute-0 podman[262594]: 2025-11-27 00:00:11.899442316 +0000 UTC m=+0.157577299 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 27 00:00:13 compute-0 nova_compute[189387]: 2025-11-27 00:00:13.932 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:00:14 compute-0 nova_compute[189387]: 2025-11-27 00:00:14.589 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:00:16 compute-0 nova_compute[189387]: 2025-11-27 00:00:16.262 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:00:19 compute-0 nova_compute[189387]: 2025-11-27 00:00:19.593 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:00:20 compute-0 nova_compute[189387]: 2025-11-27 00:00:20.145 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:00:21 compute-0 nova_compute[189387]: 2025-11-27 00:00:21.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:00:21 compute-0 nova_compute[189387]: 2025-11-27 00:00:21.126 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 27 00:00:21 compute-0 nova_compute[189387]: 2025-11-27 00:00:21.264 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:00:24 compute-0 nova_compute[189387]: 2025-11-27 00:00:24.596 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:00:25 compute-0 nova_compute[189387]: 2025-11-27 00:00:25.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:00:26 compute-0 nova_compute[189387]: 2025-11-27 00:00:26.267 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:00:26 compute-0 podman[262707]: 2025-11-27 00:00:26.803814177 +0000 UTC m=+0.089286060 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 27 00:00:29 compute-0 nova_compute[189387]: 2025-11-27 00:00:29.601 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:00:29 compute-0 podman[203621]: time="2025-11-27T00:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 27 00:00:29 compute-0 podman[203621]: @ - - [27/Nov/2025:00:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 27 00:00:29 compute-0 podman[203621]: @ - - [27/Nov/2025:00:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4341 "" "Go-http-client/1.1"
Nov 27 00:00:29 compute-0 podman[262725]: 2025-11-27 00:00:29.794220838 +0000 UTC m=+0.090951304 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 27 00:00:31 compute-0 nova_compute[189387]: 2025-11-27 00:00:31.271 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:00:31 compute-0 openstack_network_exporter[205787]: ERROR   00:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:00:31 compute-0 openstack_network_exporter[205787]: ERROR   00:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:00:31 compute-0 openstack_network_exporter[205787]: ERROR   00:00:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 27 00:00:31 compute-0 openstack_network_exporter[205787]: ERROR   00:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 27 00:00:31 compute-0 openstack_network_exporter[205787]: ERROR   00:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 27 00:00:34 compute-0 nova_compute[189387]: 2025-11-27 00:00:34.606 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:00:35 compute-0 podman[262748]: 2025-11-27 00:00:35.819973796 +0000 UTC m=+0.109154099 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 27 00:00:36 compute-0 nova_compute[189387]: 2025-11-27 00:00:36.275 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.855 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.856 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.859 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.860 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.861 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.861 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.861 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.861 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.861 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.861 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.862 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.861 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.862 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.862 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.862 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.863 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.863 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.863 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.863 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.863 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.865 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.865 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.865 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.865 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.865 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.866 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.866 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.866 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.866 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:00:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:00:36.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:00:39 compute-0 nova_compute[189387]: 2025-11-27 00:00:39.610 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:00:41 compute-0 nova_compute[189387]: 2025-11-27 00:00:41.277 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:00:42 compute-0 podman[262784]: 2025-11-27 00:00:42.826929745 +0000 UTC m=+0.082751466 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, managed_by=edpm_ansible, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41)
Nov 27 00:00:42 compute-0 podman[262771]: 2025-11-27 00:00:42.843293351 +0000 UTC m=+0.110050273 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 27 00:00:42 compute-0 podman[262769]: 2025-11-27 00:00:42.856336038 +0000 UTC m=+0.136576170 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, managed_by=edpm_ansible, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-container, container_name=kepler, release=1214.1726694543, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 27 00:00:42 compute-0 podman[262775]: 2025-11-27 00:00:42.862801321 +0000 UTC m=+0.120319817 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 27 00:00:42 compute-0 podman[262783]: 2025-11-27 00:00:42.883944574 +0000 UTC m=+0.129922832 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Nov 27 00:00:42 compute-0 podman[262770]: 2025-11-27 00:00:42.886138752 +0000 UTC m=+0.169822085 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 27 00:00:44 compute-0 nova_compute[189387]: 2025-11-27 00:00:44.616 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:00:46 compute-0 nova_compute[189387]: 2025-11-27 00:00:46.280 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:00:49 compute-0 nova_compute[189387]: 2025-11-27 00:00:49.622 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:00:51 compute-0 nova_compute[189387]: 2025-11-27 00:00:51.284 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:00:54 compute-0 nova_compute[189387]: 2025-11-27 00:00:54.627 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:00:56 compute-0 nova_compute[189387]: 2025-11-27 00:00:56.286 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:00:57 compute-0 podman[262891]: 2025-11-27 00:00:57.776236593 +0000 UTC m=+0.069922279 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, container_name=multipathd)
Nov 27 00:00:58 compute-0 nova_compute[189387]: 2025-11-27 00:00:58.136 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:00:58 compute-0 nova_compute[189387]: 2025-11-27 00:00:58.183 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 27 00:00:58 compute-0 nova_compute[189387]: 2025-11-27 00:00:58.183 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 27 00:00:58 compute-0 nova_compute[189387]: 2025-11-27 00:00:58.183 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 27 00:00:58 compute-0 nova_compute[189387]: 2025-11-27 00:00:58.183 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 27 00:00:58 compute-0 nova_compute[189387]: 2025-11-27 00:00:58.484 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 27 00:00:58 compute-0 nova_compute[189387]: 2025-11-27 00:00:58.485 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5298MB free_disk=72.29900360107422GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 27 00:00:58 compute-0 nova_compute[189387]: 2025-11-27 00:00:58.485 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 27 00:00:58 compute-0 nova_compute[189387]: 2025-11-27 00:00:58.485 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 27 00:00:58 compute-0 nova_compute[189387]: 2025-11-27 00:00:58.594 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 27 00:00:58 compute-0 nova_compute[189387]: 2025-11-27 00:00:58.595 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 27 00:00:58 compute-0 nova_compute[189387]: 2025-11-27 00:00:58.632 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 27 00:00:58 compute-0 nova_compute[189387]: 2025-11-27 00:00:58.657 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 27 00:00:58 compute-0 nova_compute[189387]: 2025-11-27 00:00:58.658 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 27 00:00:58 compute-0 nova_compute[189387]: 2025-11-27 00:00:58.659 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 27 00:00:59 compute-0 nova_compute[189387]: 2025-11-27 00:00:59.631 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:00:59 compute-0 podman[203621]: time="2025-11-27T00:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 27 00:00:59 compute-0 podman[203621]: @ - - [27/Nov/2025:00:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 27 00:00:59 compute-0 podman[203621]: @ - - [27/Nov/2025:00:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4350 "" "Go-http-client/1.1"
Nov 27 00:01:00 compute-0 nova_compute[189387]: 2025-11-27 00:01:00.647 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:01:00 compute-0 nova_compute[189387]: 2025-11-27 00:01:00.647 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 27 00:01:00 compute-0 nova_compute[189387]: 2025-11-27 00:01:00.647 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 27 00:01:00 compute-0 nova_compute[189387]: 2025-11-27 00:01:00.685 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 27 00:01:00 compute-0 nova_compute[189387]: 2025-11-27 00:01:00.686 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:01:00 compute-0 nova_compute[189387]: 2025-11-27 00:01:00.686 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 27 00:01:00 compute-0 podman[262911]: 2025-11-27 00:01:00.79420078 +0000 UTC m=+0.095743865 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 27 00:01:01 compute-0 nova_compute[189387]: 2025-11-27 00:01:01.289 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:01:01 compute-0 openstack_network_exporter[205787]: ERROR   00:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:01:01 compute-0 openstack_network_exporter[205787]: ERROR   00:01:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 27 00:01:01 compute-0 openstack_network_exporter[205787]: ERROR   00:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:01:01 compute-0 openstack_network_exporter[205787]: ERROR   00:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 27 00:01:01 compute-0 openstack_network_exporter[205787]: ERROR   00:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 27 00:01:03 compute-0 nova_compute[189387]: 2025-11-27 00:01:03.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:01:04 compute-0 nova_compute[189387]: 2025-11-27 00:01:04.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:01:04 compute-0 nova_compute[189387]: 2025-11-27 00:01:04.635 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:01:06 compute-0 nova_compute[189387]: 2025-11-27 00:01:06.120 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:01:06 compute-0 nova_compute[189387]: 2025-11-27 00:01:06.292 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:01:06 compute-0 podman[262949]: 2025-11-27 00:01:06.814070128 +0000 UTC m=+0.113256182 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 27 00:01:07 compute-0 nova_compute[189387]: 2025-11-27 00:01:07.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:01:09 compute-0 nova_compute[189387]: 2025-11-27 00:01:09.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:01:09 compute-0 nova_compute[189387]: 2025-11-27 00:01:09.639 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:01:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:01:09.672 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 27 00:01:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:01:09.674 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 27 00:01:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:01:09.674 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
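
The three lock lines above are oslo.concurrency's standard lifecycle trace: acquire requested, acquired after "waited" seconds, released after "held" seconds. The same waited/held accounting can be sketched with plain threading (an illustration of the pattern, not oslo.concurrency's implementation):

    # Reproduce the waited/held timing trace with a plain threading.Lock.
    import threading
    import time

    lock = threading.Lock()

    def check_child_processes():
        t0 = time.monotonic()
        with lock:
            waited = time.monotonic() - t0   # time spent blocked on acquire
            t1 = time.monotonic()
            # ... do the monitored work here ...
            held = time.monotonic() - t1     # time the lock was held
        print(f'Lock "_check_child_processes" :: waited {waited:.3f}s, held {held:.3f}s')

    check_child_processes()
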
Nov 27 00:01:11 compute-0 nova_compute[189387]: 2025-11-27 00:01:11.294 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:01:13 compute-0 nova_compute[189387]: 2025-11-27 00:01:13.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:01:13 compute-0 podman[262978]: 2025-11-27 00:01:13.810673533 +0000 UTC m=+0.080825299 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 27 00:01:13 compute-0 podman[262969]: 2025-11-27 00:01:13.821621945 +0000 UTC m=+0.112770169 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, container_name=kepler, version=9.4, io.openshift.expose-services=, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 27 00:01:13 compute-0 podman[262971]: 2025-11-27 00:01:13.821970904 +0000 UTC m=+0.101700705 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 00:01:13 compute-0 podman[262977]: 2025-11-27 00:01:13.826313489 +0000 UTC m=+0.096708672 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Nov 27 00:01:13 compute-0 podman[262970]: 2025-11-27 00:01:13.842104969 +0000 UTC m=+0.119811436 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 27 00:01:13 compute-0 podman[262980]: 2025-11-27 00:01:13.853245465 +0000 UTC m=+0.100523284 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, distribution-scope=public, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-type=git, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., release=1755695350)
Nov 27 00:01:14 compute-0 nova_compute[189387]: 2025-11-27 00:01:14.643 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:01:16 compute-0 nova_compute[189387]: 2025-11-27 00:01:16.296 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:01:19 compute-0 nova_compute[189387]: 2025-11-27 00:01:19.647 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:01:21 compute-0 nova_compute[189387]: 2025-11-27 00:01:21.298 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:01:24 compute-0 nova_compute[189387]: 2025-11-27 00:01:24.651 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:01:26 compute-0 nova_compute[189387]: 2025-11-27 00:01:26.301 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:01:28 compute-0 podman[263086]: 2025-11-27 00:01:28.786738351 +0000 UTC m=+0.073466544 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 27 00:01:29 compute-0 nova_compute[189387]: 2025-11-27 00:01:29.655 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:01:29 compute-0 podman[203621]: time="2025-11-27T00:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 27 00:01:29 compute-0 podman[203621]: @ - - [27/Nov/2025:00:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 27 00:01:29 compute-0 podman[203621]: @ - - [27/Nov/2025:00:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4355 "" "Go-http-client/1.1"
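
These two GET lines are prometheus-podman-exporter scraping podman's libpod REST API over the unix socket configured earlier (CONTAINER_HOST=unix:///run/podman/podman.sock). The same endpoint can be queried with nothing but the standard library; the socket path and API version below are taken from the log lines above:

    # Query the libpod containers endpoint over the podman unix socket.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path
        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c["State"])
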
Nov 27 00:01:31 compute-0 nova_compute[189387]: 2025-11-27 00:01:31.303 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:01:31 compute-0 openstack_network_exporter[205787]: ERROR   00:01:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 27 00:01:31 compute-0 openstack_network_exporter[205787]: ERROR   00:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:01:31 compute-0 openstack_network_exporter[205787]: ERROR   00:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:01:31 compute-0 openstack_network_exporter[205787]: ERROR   00:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 27 00:01:31 compute-0 openstack_network_exporter[205787]: ERROR   00:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 27 00:01:31 compute-0 podman[263105]: 2025-11-27 00:01:31.820416417 +0000 UTC m=+0.102000802 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 27 00:01:34 compute-0 nova_compute[189387]: 2025-11-27 00:01:34.658 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:01:36 compute-0 nova_compute[189387]: 2025-11-27 00:01:36.306 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:01:37 compute-0 podman[263126]: 2025-11-27 00:01:37.839967236 +0000 UTC m=+0.121498411 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
Nov 27 00:01:39 compute-0 nova_compute[189387]: 2025-11-27 00:01:39.663 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:01:41 compute-0 nova_compute[189387]: 2025-11-27 00:01:41.307 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:01:44 compute-0 nova_compute[189387]: 2025-11-27 00:01:44.667 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:01:44 compute-0 podman[263145]: 2025-11-27 00:01:44.772604821 +0000 UTC m=+0.101776705 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, version=9.4, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release-0.7.12=, name=ubi9, architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, container_name=kepler, vendor=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0)
Nov 27 00:01:44 compute-0 podman[263151]: 2025-11-27 00:01:44.775424826 +0000 UTC m=+0.087948349 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 27 00:01:44 compute-0 podman[263160]: 2025-11-27 00:01:44.776662749 +0000 UTC m=+0.084747443 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, release=1755695350, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, distribution-scope=public, build-date=2025-08-20T13:12:41, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, architecture=x86_64, managed_by=edpm_ansible, config_id=edpm, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 27 00:01:44 compute-0 podman[263159]: 2025-11-27 00:01:44.79739016 +0000 UTC m=+0.104845028 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 27 00:01:44 compute-0 podman[263147]: 2025-11-27 00:01:44.802943988 +0000 UTC m=+0.122903638 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 27 00:01:44 compute-0 podman[263146]: 2025-11-27 00:01:44.81395011 +0000 UTC m=+0.139544500 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller)
Nov 27 00:01:46 compute-0 nova_compute[189387]: 2025-11-27 00:01:46.309 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:01:49 compute-0 nova_compute[189387]: 2025-11-27 00:01:49.673 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:01:51 compute-0 nova_compute[189387]: 2025-11-27 00:01:51.314 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:01:54 compute-0 nova_compute[189387]: 2025-11-27 00:01:54.676 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:01:56 compute-0 nova_compute[189387]: 2025-11-27 00:01:56.317 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:01:59 compute-0 nova_compute[189387]: 2025-11-27 00:01:59.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:01:59 compute-0 nova_compute[189387]: 2025-11-27 00:01:59.163 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 27 00:01:59 compute-0 nova_compute[189387]: 2025-11-27 00:01:59.164 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 27 00:01:59 compute-0 nova_compute[189387]: 2025-11-27 00:01:59.164 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 27 00:01:59 compute-0 nova_compute[189387]: 2025-11-27 00:01:59.164 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 27 00:01:59 compute-0 nova_compute[189387]: 2025-11-27 00:01:59.568 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 27 00:01:59 compute-0 nova_compute[189387]: 2025-11-27 00:01:59.569 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5334MB free_disk=72.29900360107422GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
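
The pci_devices field in the resource view above is plain JSON embedded in the log line, so it can be pulled out and summarized directly. The sketch below uses a trimmed two-entry sample rather than the full list:

    # Summarize pci_devices by vendor; `raw` is a trimmed sample of the
    # JSON in the log line above (1af4 = virtio, 8086 = Intel).
    import json
    from collections import Counter

    raw = '[{"vendor_id": "1af4", "dev_type": "type-PCI"}, {"vendor_id": "8086", "dev_type": "type-PCI"}]'
    print(Counter(d["vendor_id"] for d in json.loads(raw)))
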
Nov 27 00:01:59 compute-0 nova_compute[189387]: 2025-11-27 00:01:59.569 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 27 00:01:59 compute-0 nova_compute[189387]: 2025-11-27 00:01:59.570 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 27 00:01:59 compute-0 nova_compute[189387]: 2025-11-27 00:01:59.633 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 27 00:01:59 compute-0 nova_compute[189387]: 2025-11-27 00:01:59.634 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 27 00:01:59 compute-0 nova_compute[189387]: 2025-11-27 00:01:59.659 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 27 00:01:59 compute-0 nova_compute[189387]: 2025-11-27 00:01:59.672 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
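
The inventory dict above is what placement uses to size the provider: schedulable capacity per resource class is (total - reserved) * allocation_ratio. Plugging in the logged values:

    # Worked arithmetic for the inventory in the log line above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2
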
Nov 27 00:01:59 compute-0 nova_compute[189387]: 2025-11-27 00:01:59.673 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 27 00:01:59 compute-0 nova_compute[189387]: 2025-11-27 00:01:59.674 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 27 00:01:59 compute-0 nova_compute[189387]: 2025-11-27 00:01:59.681 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:01:59 compute-0 podman[203621]: time="2025-11-27T00:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 27 00:01:59 compute-0 podman[203621]: @ - - [27/Nov/2025:00:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 27 00:01:59 compute-0 podman[203621]: @ - - [27/Nov/2025:00:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4344 "" "Go-http-client/1.1"
Nov 27 00:01:59 compute-0 podman[263263]: 2025-11-27 00:01:59.829970845 +0000 UTC m=+0.124669644 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd)
Nov 27 00:02:00 compute-0 nova_compute[189387]: 2025-11-27 00:02:00.673 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:02:00 compute-0 nova_compute[189387]: 2025-11-27 00:02:00.674 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 27 00:02:01 compute-0 nova_compute[189387]: 2025-11-27 00:02:01.126 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:02:01 compute-0 nova_compute[189387]: 2025-11-27 00:02:01.127 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 27 00:02:01 compute-0 nova_compute[189387]: 2025-11-27 00:02:01.128 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 27 00:02:01 compute-0 nova_compute[189387]: 2025-11-27 00:02:01.147 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 27 00:02:01 compute-0 nova_compute[189387]: 2025-11-27 00:02:01.318 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:02:01 compute-0 openstack_network_exporter[205787]: ERROR   00:02:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 27 00:02:01 compute-0 openstack_network_exporter[205787]: ERROR   00:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:02:01 compute-0 openstack_network_exporter[205787]: ERROR   00:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:02:01 compute-0 openstack_network_exporter[205787]: ERROR   00:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 27 00:02:01 compute-0 openstack_network_exporter[205787]: ERROR   00:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 27 00:02:02 compute-0 podman[263282]: 2025-11-27 00:02:02.768364719 +0000 UTC m=+0.070168196 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 27 00:02:04 compute-0 nova_compute[189387]: 2025-11-27 00:02:04.684 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:02:05 compute-0 nova_compute[189387]: 2025-11-27 00:02:05.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:02:05 compute-0 nova_compute[189387]: 2025-11-27 00:02:05.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:02:06 compute-0 nova_compute[189387]: 2025-11-27 00:02:06.120 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:02:06 compute-0 nova_compute[189387]: 2025-11-27 00:02:06.321 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:02:08 compute-0 podman[263305]: 2025-11-27 00:02:08.838835453 +0000 UTC m=+0.117309679 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=fcb38123433469bfaad5a5f425f59527)
Nov 27 00:02:09 compute-0 nova_compute[189387]: 2025-11-27 00:02:09.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:02:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:02:09.673 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 27 00:02:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:02:09.674 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 27 00:02:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:02:09.674 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
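The Acquiring/acquired/released triple above is oslo.concurrency's standard lock tracing: one line when a caller starts waiting, one reporting how long it waited, one reporting how long it held the lock. The same bookkeeping in plain Python, as a sketch rather than oslo's implementation:

    import threading
    import time
    from contextlib import contextmanager

    _locks = {}

    @contextmanager
    def traced_lock(name, log=print):
        lock = _locks.setdefault(name, threading.Lock())
        log(f'Acquiring lock "{name}"')
        t0 = time.monotonic()
        with lock:
            log(f'Lock "{name}" acquired :: waited {time.monotonic() - t0:.3f}s')
            t1 = time.monotonic()
            try:
                yield
            finally:
                log(f'Lock "{name}" "released" :: held {time.monotonic() - t1:.3f}s')

    with traced_lock("_check_child_processes"):
        pass  # stand-in for ProcessMonitor._check_child_processes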
Nov 27 00:02:09 compute-0 nova_compute[189387]: 2025-11-27 00:02:09.688 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:02:10 compute-0 nova_compute[189387]: 2025-11-27 00:02:10.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:02:11 compute-0 nova_compute[189387]: 2025-11-27 00:02:11.324 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:02:14 compute-0 nova_compute[189387]: 2025-11-27 00:02:14.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:02:14 compute-0 nova_compute[189387]: 2025-11-27 00:02:14.692 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:02:15 compute-0 podman[263331]: 2025-11-27 00:02:15.814957624 +0000 UTC m=+0.099283610 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent)
Nov 27 00:02:15 compute-0 podman[263324]: 2025-11-27 00:02:15.815328464 +0000 UTC m=+0.113095446 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, name=ubi9, com.redhat.component=ubi9-container, config_id=edpm, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, vcs-type=git, build-date=2024-09-18T21:23:30, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, io.buildah.version=1.29.0, release=1214.1726694543, version=9.4, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public)
Nov 27 00:02:15 compute-0 podman[263326]: 2025-11-27 00:02:15.82003363 +0000 UTC m=+0.094984326 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 27 00:02:15 compute-0 podman[263333]: 2025-11-27 00:02:15.820797419 +0000 UTC m=+0.093935307 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 27 00:02:15 compute-0 podman[263334]: 2025-11-27 00:02:15.828070332 +0000 UTC m=+0.100543702 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, container_name=openstack_network_exporter, managed_by=edpm_ansible, version=9.6, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.tags=minimal rhel9, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7)
Nov 27 00:02:15 compute-0 podman[263325]: 2025-11-27 00:02:15.856412326 +0000 UTC m=+0.144469650 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 27 00:02:16 compute-0 nova_compute[189387]: 2025-11-27 00:02:16.326 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:02:19 compute-0 nova_compute[189387]: 2025-11-27 00:02:19.697 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:02:21 compute-0 nova_compute[189387]: 2025-11-27 00:02:21.329 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:02:24 compute-0 nova_compute[189387]: 2025-11-27 00:02:24.119 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:02:24 compute-0 nova_compute[189387]: 2025-11-27 00:02:24.703 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:02:26 compute-0 nova_compute[189387]: 2025-11-27 00:02:26.332 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:02:29 compute-0 nova_compute[189387]: 2025-11-27 00:02:29.706 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:02:29 compute-0 podman[203621]: time="2025-11-27T00:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 27 00:02:29 compute-0 podman[203621]: @ - - [27/Nov/2025:00:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 27 00:02:29 compute-0 podman[203621]: @ - - [27/Nov/2025:00:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4352 "" "Go-http-client/1.1"
Nov 27 00:02:30 compute-0 podman[263443]: 2025-11-27 00:02:30.84117937 +0000 UTC m=+0.130289674 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 27 00:02:31 compute-0 nova_compute[189387]: 2025-11-27 00:02:31.340 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:02:31 compute-0 openstack_network_exporter[205787]: ERROR   00:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:02:31 compute-0 openstack_network_exporter[205787]: ERROR   00:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:02:31 compute-0 openstack_network_exporter[205787]: ERROR   00:02:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 27 00:02:31 compute-0 openstack_network_exporter[205787]: ERROR   00:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 27 00:02:31 compute-0 openstack_network_exporter[205787]: ERROR   00:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 27 00:02:33 compute-0 podman[263464]: 2025-11-27 00:02:33.791671354 +0000 UTC m=+0.085260007 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 27 00:02:34 compute-0 nova_compute[189387]: 2025-11-27 00:02:34.710 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:02:36 compute-0 nova_compute[189387]: 2025-11-27 00:02:36.340 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.856 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.856 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
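The two DEBUG lines above say this polling source has more pollsters than worker threads ([1]), so pollsters queue behind one another and the cycle runs longer. That is ordinary concurrent.futures behavior, easy to reproduce:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def pollster(name):
        time.sleep(0.1)  # stand-in for one polling pass
        return name

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as pool:  # [1] worker thread, as in the log
        results = list(pool.map(pollster, ["cpu", "memory.usage", "disk.root.size"]))
    print(results, f"{time.monotonic() - start:.2f}s")  # ~0.3s: the three tasks run serially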
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.856 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.857 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.859 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.859 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.860 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.860 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.861 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.861 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.862 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.862 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.862 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.862 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.861 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.863 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.863 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.865 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.865 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.863 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.865 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.866 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.866 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.866 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.867 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.867 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.867 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.868 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.868 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.868 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.869 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.869 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.869 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.869 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.870 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.870 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.870 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.870 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.871 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.871 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.871 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.871 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.871 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.871 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.871 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.871 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.872 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.872 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.872 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.872 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.872 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.872 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.872 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.872 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.872 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.873 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.873 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.873 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.873 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.873 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.873 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.873 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.873 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.875 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.875 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.875 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.875 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.875 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.876 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.876 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.876 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.876 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.876 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.876 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.877 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.877 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.877 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.877 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.877 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.878 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.878 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.878 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.878 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.878 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:02:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:02:36.879 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:02:39 compute-0 nova_compute[189387]: 2025-11-27 00:02:39.715 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:02:39 compute-0 podman[263488]: 2025-11-27 00:02:39.771437407 +0000 UTC m=+0.073194887 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Nov 27 00:02:41 compute-0 nova_compute[189387]: 2025-11-27 00:02:41.342 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:02:44 compute-0 nova_compute[189387]: 2025-11-27 00:02:44.718 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:02:46 compute-0 nova_compute[189387]: 2025-11-27 00:02:46.344 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:02:46 compute-0 podman[263505]: 2025-11-27 00:02:46.801508863 +0000 UTC m=+0.089586092 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, release=1214.1726694543, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, container_name=kepler, io.buildah.version=1.29.0)
Nov 27 00:02:46 compute-0 podman[263507]: 2025-11-27 00:02:46.803171077 +0000 UTC m=+0.084499806 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 27 00:02:46 compute-0 podman[263519]: 2025-11-27 00:02:46.820294472 +0000 UTC m=+0.087900866 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS)
Nov 27 00:02:46 compute-0 podman[263525]: 2025-11-27 00:02:46.830118654 +0000 UTC m=+0.092515571 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, name=ubi9-minimal, version=9.6, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-type=git, config_id=edpm)
Nov 27 00:02:46 compute-0 podman[263512]: 2025-11-27 00:02:46.84240093 +0000 UTC m=+0.111609977 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 27 00:02:46 compute-0 podman[263506]: 2025-11-27 00:02:46.848161983 +0000 UTC m=+0.135036480 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 27 00:02:49 compute-0 nova_compute[189387]: 2025-11-27 00:02:49.723 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:02:51 compute-0 nova_compute[189387]: 2025-11-27 00:02:51.347 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:02:54 compute-0 nova_compute[189387]: 2025-11-27 00:02:54.725 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:02:56 compute-0 nova_compute[189387]: 2025-11-27 00:02:56.349 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:02:59 compute-0 nova_compute[189387]: 2025-11-27 00:02:59.730 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:02:59 compute-0 podman[203621]: time="2025-11-27T00:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 27 00:02:59 compute-0 podman[203621]: @ - - [27/Nov/2025:00:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 27 00:02:59 compute-0 podman[203621]: @ - - [27/Nov/2025:00:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4354 "" "Go-http-client/1.1"
Nov 27 00:03:00 compute-0 nova_compute[189387]: 2025-11-27 00:03:00.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:03:00 compute-0 nova_compute[189387]: 2025-11-27 00:03:00.159 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 27 00:03:00 compute-0 nova_compute[189387]: 2025-11-27 00:03:00.160 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 27 00:03:00 compute-0 nova_compute[189387]: 2025-11-27 00:03:00.161 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 27 00:03:00 compute-0 nova_compute[189387]: 2025-11-27 00:03:00.161 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 27 00:03:00 compute-0 nova_compute[189387]: 2025-11-27 00:03:00.524 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 27 00:03:00 compute-0 nova_compute[189387]: 2025-11-27 00:03:00.525 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5307MB free_disk=72.29909896850586GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 27 00:03:00 compute-0 nova_compute[189387]: 2025-11-27 00:03:00.526 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 27 00:03:00 compute-0 nova_compute[189387]: 2025-11-27 00:03:00.526 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 27 00:03:00 compute-0 nova_compute[189387]: 2025-11-27 00:03:00.586 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 27 00:03:00 compute-0 nova_compute[189387]: 2025-11-27 00:03:00.586 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 27 00:03:00 compute-0 nova_compute[189387]: 2025-11-27 00:03:00.610 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 27 00:03:00 compute-0 nova_compute[189387]: 2025-11-27 00:03:00.629 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 27 00:03:00 compute-0 nova_compute[189387]: 2025-11-27 00:03:00.630 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 27 00:03:00 compute-0 nova_compute[189387]: 2025-11-27 00:03:00.631 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 27 00:03:01 compute-0 nova_compute[189387]: 2025-11-27 00:03:01.352 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:03:01 compute-0 openstack_network_exporter[205787]: ERROR   00:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:03:01 compute-0 openstack_network_exporter[205787]: ERROR   00:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:03:01 compute-0 openstack_network_exporter[205787]: ERROR   00:03:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 27 00:03:01 compute-0 openstack_network_exporter[205787]: ERROR   00:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 27 00:03:01 compute-0 openstack_network_exporter[205787]: ERROR   00:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 27 00:03:01 compute-0 nova_compute[189387]: 2025-11-27 00:03:01.631 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:03:01 compute-0 nova_compute[189387]: 2025-11-27 00:03:01.631 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 27 00:03:01 compute-0 nova_compute[189387]: 2025-11-27 00:03:01.631 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 27 00:03:01 compute-0 nova_compute[189387]: 2025-11-27 00:03:01.652 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 27 00:03:01 compute-0 nova_compute[189387]: 2025-11-27 00:03:01.652 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:03:01 compute-0 nova_compute[189387]: 2025-11-27 00:03:01.653 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 27 00:03:01 compute-0 podman[263621]: 2025-11-27 00:03:01.842425276 +0000 UTC m=+0.124158211 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 27 00:03:04 compute-0 nova_compute[189387]: 2025-11-27 00:03:04.734 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:03:04 compute-0 podman[263640]: 2025-11-27 00:03:04.79842547 +0000 UTC m=+0.096919458 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 27 00:03:06 compute-0 nova_compute[189387]: 2025-11-27 00:03:06.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:03:06 compute-0 nova_compute[189387]: 2025-11-27 00:03:06.126 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:03:06 compute-0 nova_compute[189387]: 2025-11-27 00:03:06.355 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:03:07 compute-0 nova_compute[189387]: 2025-11-27 00:03:07.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:03:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:03:09.675 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 27 00:03:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:03:09.676 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 27 00:03:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:03:09.676 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 27 00:03:09 compute-0 nova_compute[189387]: 2025-11-27 00:03:09.740 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:03:10 compute-0 nova_compute[189387]: 2025-11-27 00:03:10.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:03:10 compute-0 podman[263661]: 2025-11-27 00:03:10.836418858 +0000 UTC m=+0.127143011 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=fcb38123433469bfaad5a5f425f59527)
Nov 27 00:03:11 compute-0 nova_compute[189387]: 2025-11-27 00:03:11.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:03:11 compute-0 nova_compute[189387]: 2025-11-27 00:03:11.359 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:03:14 compute-0 nova_compute[189387]: 2025-11-27 00:03:14.745 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:03:15 compute-0 nova_compute[189387]: 2025-11-27 00:03:15.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:03:16 compute-0 nova_compute[189387]: 2025-11-27 00:03:16.362 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:03:17 compute-0 podman[263684]: 2025-11-27 00:03:17.800462337 +0000 UTC m=+0.082966166 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 27 00:03:17 compute-0 podman[263697]: 2025-11-27 00:03:17.807968036 +0000 UTC m=+0.083670904 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, container_name=openstack_network_exporter, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, version=9.6)
Nov 27 00:03:17 compute-0 podman[263681]: 2025-11-27 00:03:17.808984423 +0000 UTC m=+0.105802274 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, vcs-type=git, release-0.7.12=, distribution-scope=public, version=9.4, release=1214.1726694543, config_id=edpm, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30)
Nov 27 00:03:17 compute-0 podman[263683]: 2025-11-27 00:03:17.819947635 +0000 UTC m=+0.109048560 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
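
The node_exporter command line above whitelists systemd units with the --collector.systemd.unit-include regex (the doubled backslash is Python-repr escaping inside config_data). A quick, illustrative check of what that pattern matches, using plain Python re and hypothetical unit names:

    import re

    # Pattern copied from the node_exporter flags above; node_exporter itself
    # applies it with Go's regexp, so treat this as an approximation.
    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    # Sample unit names for illustration only.
    for unit in ["ovsdb-server.service", "virtqemud.service",
                 "edpm_nova_compute.service", "sshd.service"]:
        print(unit, "included" if unit_include.fullmatch(unit) else "excluded")
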
Nov 27 00:03:17 compute-0 podman[263691]: 2025-11-27 00:03:17.812552369 +0000 UTC m=+0.093900978 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Nov 27 00:03:17 compute-0 podman[263682]: 2025-11-27 00:03:17.843776378 +0000 UTC m=+0.137367422 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
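
Each health_status entry in this block is podman running the healthcheck configured in config_data ('test': '/openstack/healthcheck', bind-mounted from /var/lib/openstack/healthchecks/<container>) and recording the result along with health_failing_streak. Assuming the podman CLI on the host, the same check can be driven by hand from Python, e.g. for the ovn_controller container named above:

    import subprocess

    # `podman healthcheck run` executes the container's configured test and
    # exits 0 on success, matching the health_status=healthy entries above.
    result = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if result.returncode == 0 else "unhealthy")
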
Nov 27 00:03:19 compute-0 nova_compute[189387]: 2025-11-27 00:03:19.749 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:03:21 compute-0 nova_compute[189387]: 2025-11-27 00:03:21.366 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:03:24 compute-0 nova_compute[189387]: 2025-11-27 00:03:24.754 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:03:26 compute-0 nova_compute[189387]: 2025-11-27 00:03:26.367 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:03:29 compute-0 podman[203621]: time="2025-11-27T00:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 27 00:03:29 compute-0 podman[203621]: @ - - [27/Nov/2025:00:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 27 00:03:29 compute-0 nova_compute[189387]: 2025-11-27 00:03:29.759 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:03:29 compute-0 podman[203621]: @ - - [27/Nov/2025:00:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4354 "" "Go-http-client/1.1"
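
The podman[203621] access-log entries above are a client polling the libpod REST API over the podman unix socket (the "@" stands in for the peer address; the podman_exporter config further down points CONTAINER_HOST at unix:///run/podman/podman.sock). A minimal Python sketch of the first GET, assuming that socket path and the v4.9.3 API seen in the log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Just enough HTTP-over-unix-socket to talk to the podman API."""

        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.sock_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"][0], c["State"])
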
Nov 27 00:03:31 compute-0 nova_compute[189387]: 2025-11-27 00:03:31.371 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:03:31 compute-0 openstack_network_exporter[205787]: ERROR   00:03:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 27 00:03:31 compute-0 openstack_network_exporter[205787]: ERROR   00:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:03:31 compute-0 openstack_network_exporter[205787]: ERROR   00:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:03:31 compute-0 openstack_network_exporter[205787]: ERROR   00:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 27 00:03:31 compute-0 openstack_network_exporter[205787]: ERROR   00:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
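
The exporter errors above mean no appctl control sockets are visible where the exporter looks: ovsdb-server, ovs-vswitchd and ovn-northd each create a <daemon>.<pid>.ctl socket in their run directory, which the exporter reaches through the /var/run/openvswitch and /var/lib/openvswitch/ovn volumes in its config_data. A small diagnostic sketch (not part of the exporter) that lists what is actually present at those container-side paths:

    import glob

    # appctl-style calls need a <daemon>.<pid>.ctl control socket; when the
    # glob comes back empty, the exporter logs the errors seen above.
    for rundir in ("/run/openvswitch", "/run/ovn"):
        socks = glob.glob(f"{rundir}/*.ctl")
        print(rundir, socks or "no control sockets found")
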
Nov 27 00:03:32 compute-0 podman[263800]: 2025-11-27 00:03:32.814621245 +0000 UTC m=+0.093797914 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 27 00:03:34 compute-0 nova_compute[189387]: 2025-11-27 00:03:34.762 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:03:35 compute-0 podman[263819]: 2025-11-27 00:03:35.771730815 +0000 UTC m=+0.063596440 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 27 00:03:36 compute-0 nova_compute[189387]: 2025-11-27 00:03:36.373 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:03:39 compute-0 nova_compute[189387]: 2025-11-27 00:03:39.765 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:03:41 compute-0 nova_compute[189387]: 2025-11-27 00:03:41.375 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:03:41 compute-0 podman[263843]: 2025-11-27 00:03:41.793912345 +0000 UTC m=+0.092608913 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 27 00:03:44 compute-0 nova_compute[189387]: 2025-11-27 00:03:44.769 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:03:46 compute-0 nova_compute[189387]: 2025-11-27 00:03:46.377 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:03:48 compute-0 podman[263865]: 2025-11-27 00:03:48.786982667 +0000 UTC m=+0.069803586 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 27 00:03:48 compute-0 podman[263866]: 2025-11-27 00:03:48.809227498 +0000 UTC m=+0.079435452 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 27 00:03:48 compute-0 podman[263867]: 2025-11-27 00:03:48.817467847 +0000 UTC m=+0.091851232 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Nov 27 00:03:48 compute-0 podman[263863]: 2025-11-27 00:03:48.83077298 +0000 UTC m=+0.116917647 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, name=ubi9, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.openshift.expose-services=, release-0.7.12=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, container_name=kepler, vcs-type=git, config_id=edpm, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Nov 27 00:03:48 compute-0 podman[263882]: 2025-11-27 00:03:48.838688451 +0000 UTC m=+0.109456030 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, managed_by=edpm_ansible, version=9.6, io.openshift.tags=minimal rhel9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 27 00:03:48 compute-0 podman[263864]: 2025-11-27 00:03:48.839480862 +0000 UTC m=+0.125975729 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 27 00:03:49 compute-0 nova_compute[189387]: 2025-11-27 00:03:49.773 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:03:51 compute-0 nova_compute[189387]: 2025-11-27 00:03:51.381 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:03:54 compute-0 nova_compute[189387]: 2025-11-27 00:03:54.776 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:03:56 compute-0 nova_compute[189387]: 2025-11-27 00:03:56.383 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:03:59 compute-0 podman[203621]: time="2025-11-27T00:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 27 00:03:59 compute-0 podman[203621]: @ - - [27/Nov/2025:00:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 27 00:03:59 compute-0 podman[203621]: @ - - [27/Nov/2025:00:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4355 "" "Go-http-client/1.1"
Nov 27 00:03:59 compute-0 nova_compute[189387]: 2025-11-27 00:03:59.779 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:04:01 compute-0 nova_compute[189387]: 2025-11-27 00:04:01.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 27 00:04:01 compute-0 nova_compute[189387]: 2025-11-27 00:04:01.159 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 27 00:04:01 compute-0 nova_compute[189387]: 2025-11-27 00:04:01.160 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 27 00:04:01 compute-0 nova_compute[189387]: 2025-11-27 00:04:01.160 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
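
The Acquiring / acquired / released triplet above is the debug logging that oslo.concurrency wraps around a synchronized call (the "inner ... lockutils.py" source references are its decorator internals). In sketch form, with the lock name taken from the log and the function body hypothetical:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # While this runs, lockutils emits the same "acquired ... waited" /
        # "released ... held" debug lines seen above.
        pass

    clean_compute_node_cache()
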
Nov 27 00:04:01 compute-0 nova_compute[189387]: 2025-11-27 00:04:01.161 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 27 00:04:01 compute-0 nova_compute[189387]: 2025-11-27 00:04:01.387 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:04:01 compute-0 openstack_network_exporter[205787]: ERROR   00:04:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 27 00:04:01 compute-0 openstack_network_exporter[205787]: ERROR   00:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:04:01 compute-0 openstack_network_exporter[205787]: ERROR   00:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:04:01 compute-0 openstack_network_exporter[205787]: ERROR   00:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 27 00:04:01 compute-0 openstack_network_exporter[205787]: ERROR   00:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 27 00:04:01 compute-0 nova_compute[189387]: 2025-11-27 00:04:01.613 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 27 00:04:01 compute-0 nova_compute[189387]: 2025-11-27 00:04:01.615 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5316MB free_disk=72.29909896850586GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 27 00:04:01 compute-0 nova_compute[189387]: 2025-11-27 00:04:01.615 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 27 00:04:01 compute-0 nova_compute[189387]: 2025-11-27 00:04:01.616 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 27 00:04:01 compute-0 nova_compute[189387]: 2025-11-27 00:04:01.678 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 27 00:04:01 compute-0 nova_compute[189387]: 2025-11-27 00:04:01.679 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 27 00:04:01 compute-0 nova_compute[189387]: 2025-11-27 00:04:01.701 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 27 00:04:01 compute-0 nova_compute[189387]: 2025-11-27 00:04:01.714 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
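
Placement computes usable capacity per resource class as (total - reserved) * allocation_ratio, so the inventory record above amounts to 32 schedulable VCPUs, 7168 MB of RAM and roughly 70 GB of disk. A worked check with the logged figures (illustrative arithmetic, not service output):

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2
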
Nov 27 00:04:01 compute-0 nova_compute[189387]: 2025-11-27 00:04:01.715 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 27 00:04:01 compute-0 nova_compute[189387]: 2025-11-27 00:04:01.715 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 27 00:04:02 compute-0 nova_compute[189387]: 2025-11-27 00:04:02.716 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 27 00:04:02 compute-0 nova_compute[189387]: 2025-11-27 00:04:02.717 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 27 00:04:03 compute-0 nova_compute[189387]: 2025-11-27 00:04:03.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 27 00:04:03 compute-0 nova_compute[189387]: 2025-11-27 00:04:03.126 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 27 00:04:03 compute-0 nova_compute[189387]: 2025-11-27 00:04:03.126 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 27 00:04:03 compute-0 nova_compute[189387]: 2025-11-27 00:04:03.148 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 27 00:04:03 compute-0 podman[263985]: 2025-11-27 00:04:03.825664086 +0000 UTC m=+0.115009428 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 27 00:04:04 compute-0 nova_compute[189387]: 2025-11-27 00:04:04.786 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:04:06 compute-0 nova_compute[189387]: 2025-11-27 00:04:06.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 27 00:04:06 compute-0 nova_compute[189387]: 2025-11-27 00:04:06.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 27 00:04:06 compute-0 nova_compute[189387]: 2025-11-27 00:04:06.391 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:04:06 compute-0 podman[264007]: 2025-11-27 00:04:06.757869853 +0000 UTC m=+0.061106185 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 27 00:04:07 compute-0 nova_compute[189387]: 2025-11-27 00:04:07.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 27 00:04:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:04:09.676 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 27 00:04:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:04:09.677 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 27 00:04:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:04:09.677 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 27 00:04:09 compute-0 nova_compute[189387]: 2025-11-27 00:04:09.790 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:04:11 compute-0 nova_compute[189387]: 2025-11-27 00:04:11.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 27 00:04:11 compute-0 nova_compute[189387]: 2025-11-27 00:04:11.393 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:04:12 compute-0 nova_compute[189387]: 2025-11-27 00:04:12.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 27 00:04:12 compute-0 podman[264032]: 2025-11-27 00:04:12.809406223 +0000 UTC m=+0.106356648 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, io.buildah.version=1.41.4)
Nov 27 00:04:14 compute-0 nova_compute[189387]: 2025-11-27 00:04:14.794 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:04:16 compute-0 nova_compute[189387]: 2025-11-27 00:04:16.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 27 00:04:16 compute-0 nova_compute[189387]: 2025-11-27 00:04:16.215 189391 DEBUG oslo_concurrency.processutils [None req-29a1029f-267b-4471-b43f-cc76dd69033f 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 27 00:04:16 compute-0 nova_compute[189387]: 2025-11-27 00:04:16.246 189391 DEBUG oslo_concurrency.processutils [None req-29a1029f-267b-4471-b43f-cc76dd69033f 6ad061874c77438db2e6d8efb2b1400b dd2e793599b6418881c391df7f71e0c6 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 27 00:04:16 compute-0 nova_compute[189387]: 2025-11-27 00:04:16.396 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:04:19 compute-0 nova_compute[189387]: 2025-11-27 00:04:19.799 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:04:19 compute-0 podman[264054]: 2025-11-27 00:04:19.802439283 +0000 UTC m=+0.091663487 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 27 00:04:19 compute-0 podman[264055]: 2025-11-27 00:04:19.81244009 +0000 UTC m=+0.097741809 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 27 00:04:19 compute-0 podman[264052]: 2025-11-27 00:04:19.816465827 +0000 UTC m=+0.110669163 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.buildah.version=1.29.0, vendor=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 27 00:04:19 compute-0 podman[264068]: 2025-11-27 00:04:19.836233462 +0000 UTC m=+0.111268739 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, version=9.6)
Nov 27 00:04:19 compute-0 podman[264060]: 2025-11-27 00:04:19.841731738 +0000 UTC m=+0.107373485 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi)
Nov 27 00:04:19 compute-0 podman[264053]: 2025-11-27 00:04:19.842764415 +0000 UTC m=+0.131379523 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
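The six health_status events above are emitted by podman's healthcheck timers; each container's config_data carries the healthcheck test script and mount that produce them. A minimal sketch (assuming podman is on PATH and the container names from this log) of reading the same state out of band:

    import json
    import subprocess

    def container_health(name: str) -> str:
        # "podman inspect" returns a JSON array; State.Health.Status is the
        # same value the events above log as health_status=.
        out = subprocess.run(
            ["podman", "inspect", name],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)[0]["State"]["Health"]["Status"]

    for name in ("node_exporter", "ovn_metadata_agent", "kepler", "multipathd"):
        print(name, container_health(name))

State.Health also carries FailingStreak and the recent probe Log, which correspond to the health_failing_streak= and health_log= fields in the events above.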
Nov 27 00:04:21 compute-0 nova_compute[189387]: 2025-11-27 00:04:21.397 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:04:23 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:04:23.534 106595 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ea:74:94', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '16:17:d1:48:8c:c3'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 27 00:04:23 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:04:23.535 106595 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
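The two ovn_metadata_agent lines above show ovsdbapp's event machinery: an UPDATE on SB_Global (nb_cfg 17 -> 18) matched an event named SbGlobalUpdateEvent, and the agent deliberately waits 10 seconds before acknowledging it. A sketch of how such an event is declared on ovsdbapp's RowEvent base class (the class name comes from this log; the body is illustrative only):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        """Fires when a row in SB_Global is updated, as matched above."""

        def __init__(self):
            # events, table, conditions - the same triple printed in the log.
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)
            self.event_name = 'SbGlobalUpdateEvent'

        def run(self, event, row, old):
            # row.nb_cfg is the new northbound sequence number (18 here);
            # the agent later echoes it into Chassis_Private.
            print('nb_cfg is now', row.nb_cfg)

The delay presumably spreads the resulting Chassis_Private writes from many compute nodes over time; the acknowledgement itself appears at 00:04:33 below.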
Nov 27 00:04:23 compute-0 nova_compute[189387]: 2025-11-27 00:04:23.535 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:04:24 compute-0 nova_compute[189387]: 2025-11-27 00:04:24.119 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
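"Running periodic task ComputeManager._sync_scheduler_instance_info" is oslo.service's periodic-task machinery firing inside nova-compute. A self-contained sketch of the same wiring (only the task name is taken from the log; everything else is illustrative, and nova's real task reports the host's instance list to the scheduler):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    CONF([], project='example')  # parse an empty command line

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _sync_scheduler_instance_info(self, context):
            # Placeholder body; the decorator registers this method so
            # run_periodic_tasks() invokes it on its spacing interval.
            print('periodic task ran')

    Manager().run_periodic_tasks(context=None)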
Nov 27 00:04:24 compute-0 nova_compute[189387]: 2025-11-27 00:04:24.802 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:04:26 compute-0 nova_compute[189387]: 2025-11-27 00:04:26.401 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:04:29 compute-0 podman[203621]: time="2025-11-27T00:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 27 00:04:29 compute-0 podman[203621]: @ - - [27/Nov/2025:00:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 27 00:04:29 compute-0 podman[203621]: @ - - [27/Nov/2025:00:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4348 "" "Go-http-client/1.1"
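The two podman[203621] access-log lines above are the libpod REST API service answering requests on the local socket: first a full container listing, then a one-shot stats query. A sketch of issuing the same listing call (the URL is taken from the log; the socket path is the rootful default and is an assumption, and curl must be installed):

    import json
    import subprocess

    SOCK = "/run/podman/podman.sock"  # assumption: rootful service socket
    URL = "http://d/v4.9.3/libpod/containers/json?all=true"

    raw = subprocess.run(
        ["curl", "-s", "--unix-socket", SOCK, URL],
        capture_output=True, text=True, check=True,
    ).stdout
    for c in json.loads(raw):
        print(c["Names"][0], c["State"])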
Nov 27 00:04:29 compute-0 nova_compute[189387]: 2025-11-27 00:04:29.806 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:04:31 compute-0 nova_compute[189387]: 2025-11-27 00:04:31.403 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:04:31 compute-0 openstack_network_exporter[205787]: ERROR   00:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:04:31 compute-0 openstack_network_exporter[205787]: ERROR   00:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:04:31 compute-0 openstack_network_exporter[205787]: ERROR   00:04:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 27 00:04:31 compute-0 openstack_network_exporter[205787]: ERROR   00:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 27 00:04:31 compute-0 openstack_network_exporter[205787]: ERROR   00:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
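The openstack_network_exporter errors above are expected on a compute node: ovn-northd and a standalone ovsdb-server run only on controller nodes, so their control sockets are absent here, and without a userspace (netdev) datapath the pmd-perf/pmd-rxq appctl calls have nothing to query. A sketch of the same control-socket discovery (the glob patterns are the usual default locations, an assumption):

    import glob

    # ovn-northd only runs on controller nodes, so on a compute node both
    # globs are expected to come back empty, matching the errors above.
    for pattern in ("/var/run/ovn/ovn-northd.*.ctl",
                    "/var/run/openvswitch/ovsdb-server.*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "no control socket files found")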
Nov 27 00:04:33 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:04:33.538 106595 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=bbd59242-3683-4df7-8a2a-12b2eb702783, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
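This transaction closes the loop opened at 00:04:23: after the 10-second delay the agent writes nb_cfg=18 into Chassis_Private.external_ids under the key neutron:ovn-metadata-sb-cfg. A sketch of the same write through ovsdbapp's public API (the table, record UUID, and column values are taken from the log; the endpoint and the if_exists kwarg, which newer ovsdbapp releases accept, are assumptions):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    SB = "tcp:127.0.0.1:6642"  # assumption: reachable OVN SB ovsdb endpoint
    idl = connection.OvsdbIdl.from_server(SB, "OVN_Southbound")
    api = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))

    # Same column update as the DbSetCommand above; if_exists keeps the
    # transaction from failing if the chassis record has vanished.
    api.db_set(
        "Chassis_Private",
        "bbd59242-3683-4df7-8a2a-12b2eb702783",
        ("external_ids", {"neutron:ovn-metadata-sb-cfg": "18"}),
        if_exists=True,
    ).execute(check_error=True)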
Nov 27 00:04:34 compute-0 nova_compute[189387]: 2025-11-27 00:04:34.810 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:04:34 compute-0 podman[264177]: 2025-11-27 00:04:34.813393183 +0000 UTC m=+0.107085247 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, config_id=multipathd)
Nov 27 00:04:36 compute-0 nova_compute[189387]: 2025-11-27 00:04:36.406 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.857 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.858 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.859 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.860 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.861 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.861 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.861 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.862 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.862 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.862 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.861 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.863 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.863 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.864 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.865 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.865 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.865 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.866 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.866 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.866 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.866 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.866 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.867 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.867 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.867 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.867 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.868 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.868 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.868 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.868 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.868 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.869 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.869 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.869 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.869 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.869 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.870 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.870 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.870 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.870 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.870 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.870 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.870 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.870 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.870 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.871 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.871 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.871 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.871 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.871 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.871 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.871 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.871 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.872 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.872 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.872 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.872 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.872 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.872 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.872 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.872 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.873 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.873 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.873 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.873 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.873 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.873 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.873 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.874 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.875 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.875 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.875 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.875 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.875 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.875 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.875 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.876 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.876 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.876 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.876 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.876 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.876 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.876 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.877 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.877 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.877 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.877 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.877 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:04:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:04:36.877 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
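
The ceilometer lines above trace one polling cycle: the agent runs a discovery method (here [local_instances]) for each pollster, skips a pollster when discovery returns nothing, and otherwise logs "Finished processing" once its samples are collected. A minimal Python sketch of that control flow, with illustrative names rather than ceilometer's real classes:

    class Pollster:
        def __init__(self, name):
            self.name = name

        def get_samples(self, resources):
            # A real pollster would read libvirt or OVS counters here.
            return [f"{self.name}@{r}" for r in resources]

    def run_cycle(pollsters, discover):
        resources = discover()  # e.g. the [local_instances] discovery above
        for p in pollsters:
            if not resources:
                print(f"Skip pollster {p.name}, no resources found this cycle")
                continue
            for _sample in p.get_samples(resources):
                pass  # a real agent would publish each sample here
            print(f"Finished processing pollster [{p.name}].")

    # With no instances on the host, discovery returns nothing, as in the log:
    run_cycle([Pollster("network.incoming.bytes.rate")], discover=lambda: [])
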
Nov 27 00:04:37 compute-0 podman[264197]: 2025-11-27 00:04:37.821975333 +0000 UTC m=+0.111916626 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 27 00:04:39 compute-0 nova_compute[189387]: 2025-11-27 00:04:39.815 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:04:41 compute-0 nova_compute[189387]: 2025-11-27 00:04:41.408 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
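
The recurring [POLLIN] entries are the OVSDB IDL's event loop reporting that its connection to ovsdb-server (fd 26) became readable; ovs's poller blocks in poll() and logs which file descriptor woke it. The pattern, reduced to standard-library Python:

    import os
    import select

    # Minimal illustration of the wait behind the "[POLLIN] on fd N" lines:
    # register an fd, block in poll(), and report the wakeup event.
    r, w = os.pipe()
    poller = select.poll()
    poller.register(r, select.POLLIN)
    os.write(w, b"x")  # make the read end readable
    for fd, event in poller.poll(1000):
        if event & select.POLLIN:
            print(f"[POLLIN] on fd {fd}")
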
Nov 27 00:04:43 compute-0 podman[264221]: 2025-11-27 00:04:43.784243001 +0000 UTC m=+0.079886124 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 27 00:04:44 compute-0 nova_compute[189387]: 2025-11-27 00:04:44.819 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:04:46 compute-0 nova_compute[189387]: 2025-11-27 00:04:46.410 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:04:49 compute-0 nova_compute[189387]: 2025-11-27 00:04:49.823 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:04:50 compute-0 podman[264243]: 2025-11-27 00:04:50.803238572 +0000 UTC m=+0.085926235 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
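
The node_exporter container above restricts its systemd collector to units matching the regex in --collector.systemd.unit-include. A quick standard-library check of which unit names that pattern admits (the sample units are invented for illustration; node_exporter anchors the pattern to the full name, which fullmatch approximates):

    import re

    # Regex copied from the node_exporter flag in the config above.
    pattern = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    for unit in ["edpm_nova_compute.service", "ovs-vswitchd.service",
                 "virtqemud.service", "sshd.service", "rsyslog.service"]:
        print(unit, "->", "included" if pattern.fullmatch(unit) else "excluded")
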
Nov 27 00:04:50 compute-0 podman[264241]: 2025-11-27 00:04:50.825112393 +0000 UTC m=+0.108127155 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, managed_by=edpm_ansible, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, container_name=kepler, distribution-scope=public, vcs-type=git, version=9.4, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, name=ubi9, com.redhat.component=ubi9-container, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 27 00:04:50 compute-0 podman[264244]: 2025-11-27 00:04:50.825124743 +0000 UTC m=+0.095226492 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 27 00:04:50 compute-0 podman[264246]: 2025-11-27 00:04:50.825636217 +0000 UTC m=+0.089823169 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, container_name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, build-date=2025-08-20T13:12:41)
Nov 27 00:04:50 compute-0 podman[264245]: 2025-11-27 00:04:50.831473082 +0000 UTC m=+0.108188906 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 27 00:04:50 compute-0 podman[264242]: 2025-11-27 00:04:50.844530109 +0000 UTC m=+0.131606719 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 27 00:04:51 compute-0 nova_compute[189387]: 2025-11-27 00:04:51.414 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:04:54 compute-0 nova_compute[189387]: 2025-11-27 00:04:54.827 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:04:56 compute-0 nova_compute[189387]: 2025-11-27 00:04:56.415 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:04:59 compute-0 nova_compute[189387]: 2025-11-27 00:04:59.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:04:59 compute-0 nova_compute[189387]: 2025-11-27 00:04:59.124 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 27 00:04:59 compute-0 nova_compute[189387]: 2025-11-27 00:04:59.144 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
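
_run_pending_deletes and the other ComputeManager entries in this journal are driven by oslo.service's periodic task machinery. A sketch of the pattern (not nova's actual code; spacing and first-run behaviour are managed by run_periodic_tasks):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task
        def _run_pending_deletes(self, context):
            instances = []  # nova looks up soft-deleted instances here
            print(f"There are {len(instances)} instances to clean")

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)  # logs/runs each due task
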
Nov 27 00:04:59 compute-0 podman[203621]: time="2025-11-27T00:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 27 00:04:59 compute-0 podman[203621]: @ - - [27/Nov/2025:00:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 27 00:04:59 compute-0 podman[203621]: @ - - [27/Nov/2025:00:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4347 "" "Go-http-client/1.1"
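
The two GET lines are podman_exporter scraping the libpod REST API over the podman socket (unix:///run/podman/podman.sock in the exporter's config above). The same query can be issued from standard-library Python; the socket path and API version are taken from the log, and read access to the socket is required:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection variant that connects to a Unix socket, not TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    # Endpoint copied from the access log line above.
    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c.get("Names"), c.get("State"))
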
Nov 27 00:04:59 compute-0 nova_compute[189387]: 2025-11-27 00:04:59.831 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:05:01 compute-0 openstack_network_exporter[205787]: ERROR   00:05:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 27 00:05:01 compute-0 openstack_network_exporter[205787]: ERROR   00:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:05:01 compute-0 openstack_network_exporter[205787]: ERROR   00:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:05:01 compute-0 openstack_network_exporter[205787]: ERROR   00:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 27 00:05:01 compute-0 openstack_network_exporter[205787]: ERROR   00:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
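
These exporter errors mean it found no OVS/OVN control sockets to query, and the dpif-netdev calls fail, most likely because no userspace (netdev) datapath exists on this host. Both conditions can be checked by hand; the run directory below is the conventional one that the exporter's config mounts, so adjust if a deployment differs:

    import glob
    import subprocess

    # Look for the *.ctl control sockets the exporter could not find.
    ctl_files = glob.glob("/var/run/openvswitch/*.ctl")
    print("control sockets:", ctl_files or "none found")

    # The same query the exporter attempts; on a host without a userspace
    # datapath this fails with "please specify an existing datapath".
    if any("ovs-vswitchd" in f for f in ctl_files):
        subprocess.run(["ovs-appctl", "dpif-netdev/pmd-perf-show"], check=False)
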
Nov 27 00:05:01 compute-0 nova_compute[189387]: 2025-11-27 00:05:01.420 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:05:03 compute-0 nova_compute[189387]: 2025-11-27 00:05:03.144 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:05:03 compute-0 nova_compute[189387]: 2025-11-27 00:05:03.144 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 27 00:05:03 compute-0 nova_compute[189387]: 2025-11-27 00:05:03.145 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:05:03 compute-0 nova_compute[189387]: 2025-11-27 00:05:03.186 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 27 00:05:03 compute-0 nova_compute[189387]: 2025-11-27 00:05:03.187 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 27 00:05:03 compute-0 nova_compute[189387]: 2025-11-27 00:05:03.187 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
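
The acquire/held/released trio above is oslo.concurrency's lockutils instrumenting a critical section (held 0.000s, so the cache clean was effectively a no-op). Equivalent usage, sketched:

    from oslo_concurrency import lockutils

    # Both forms take the same named lock and produce the acquired/released
    # timing lines seen in the journal.
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass  # critical section

    with lockutils.lock("compute_resources"):
        pass  # context-manager form of the same lock
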
Nov 27 00:05:03 compute-0 nova_compute[189387]: 2025-11-27 00:05:03.187 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 27 00:05:03 compute-0 nova_compute[189387]: 2025-11-27 00:05:03.497 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 27 00:05:03 compute-0 nova_compute[189387]: 2025-11-27 00:05:03.498 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5338MB free_disk=72.29911804199219GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
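
The resource view embeds the host's PCI inventory as JSON. A few lines of standard-library Python summarize it; the list below is abbreviated from the log line above (vendor 1af4 is Red Hat virtio, 8086 is Intel, consistent with a KVM guest):

    import json
    from collections import Counter

    # Abbreviated copy of the pci_devices JSON from the resource view above.
    pci_devices = json.loads("""[
      {"address": "0000:00:04.0", "vendor_id": "1af4", "product_id": "1001"},
      {"address": "0000:00:03.0", "vendor_id": "1af4", "product_id": "1000"},
      {"address": "0000:00:00.0", "vendor_id": "8086", "product_id": "1237"}
    ]""")

    print(Counter(d["vendor_id"] for d in pci_devices))
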
Nov 27 00:05:03 compute-0 nova_compute[189387]: 2025-11-27 00:05:03.499 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 27 00:05:03 compute-0 nova_compute[189387]: 2025-11-27 00:05:03.499 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 27 00:05:03 compute-0 nova_compute[189387]: 2025-11-27 00:05:03.658 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 27 00:05:03 compute-0 nova_compute[189387]: 2025-11-27 00:05:03.659 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 27 00:05:03 compute-0 nova_compute[189387]: 2025-11-27 00:05:03.737 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing inventories for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 27 00:05:03 compute-0 nova_compute[189387]: 2025-11-27 00:05:03.843 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Updating ProviderTree inventory for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 27 00:05:03 compute-0 nova_compute[189387]: 2025-11-27 00:05:03.844 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Updating inventory in ProviderTree for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 27 00:05:03 compute-0 nova_compute[189387]: 2025-11-27 00:05:03.888 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing aggregate associations for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 27 00:05:03 compute-0 nova_compute[189387]: 2025-11-27 00:05:03.909 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Refreshing trait associations for resource provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78, traits: COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE41,HW_CPU_X86_AMD_SVM,HW_CPU_X86_MMX,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_FDC,COMPUTE_RESCUE_BFV,HW_CPU_X86_BMI,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX,COMPUTE_NODE,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_SSE4A,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_F16C,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,HW_CPU_X86_SSE,HW_CPU_X86_AESNI,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_USB,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_SECURITY_TPM_2_0,COMPUTE_ACCELERATORS,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_BMI2,HW_CPU_X86_FMA3,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_AVX2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_SECURITY_TPM_1_2,COMPUTE_IMAGE_TYPE_RAW _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 27 00:05:03 compute-0 nova_compute[189387]: 2025-11-27 00:05:03.935 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 27 00:05:03 compute-0 nova_compute[189387]: 2025-11-27 00:05:03.949 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 27 00:05:03 compute-0 nova_compute[189387]: 2025-11-27 00:05:03.951 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 27 00:05:03 compute-0 nova_compute[189387]: 2025-11-27 00:05:03.951 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.452s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
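
Placement derives schedulable capacity from each inventory as (total - reserved) * allocation_ratio, so the inventory reported above yields 32 VCPU, 7168 MB of RAM and about 70.2 GB of disk for scheduling, even though the raw totals are 8 vCPUs, 7680 MB and 79 GB. Worked out:

    # Inventory copied from the resource tracker lines above (min/max_unit
    # and step_size omitted; they do not affect total capacity).
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, "=", cap)  # VCPU = 32.0, MEMORY_MB = 7168.0, DISK_GB ~ 70.2
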
Nov 27 00:05:04 compute-0 nova_compute[189387]: 2025-11-27 00:05:04.836 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:05:04 compute-0 nova_compute[189387]: 2025-11-27 00:05:04.932 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:05:04 compute-0 nova_compute[189387]: 2025-11-27 00:05:04.933 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 27 00:05:04 compute-0 nova_compute[189387]: 2025-11-27 00:05:04.934 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 27 00:05:04 compute-0 nova_compute[189387]: 2025-11-27 00:05:04.961 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 27 00:05:05 compute-0 podman[264361]: 2025-11-27 00:05:05.861758186 +0000 UTC m=+0.147211635 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 27 00:05:06 compute-0 nova_compute[189387]: 2025-11-27 00:05:06.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:05:06 compute-0 nova_compute[189387]: 2025-11-27 00:05:06.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:05:06 compute-0 nova_compute[189387]: 2025-11-27 00:05:06.422 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:05:08 compute-0 nova_compute[189387]: 2025-11-27 00:05:08.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:05:08 compute-0 podman[264381]: 2025-11-27 00:05:08.770369337 +0000 UTC m=+0.072264882 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 27 00:05:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:05:09.678 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 27 00:05:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:05:09.678 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 27 00:05:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:05:09.678 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 27 00:05:09 compute-0 nova_compute[189387]: 2025-11-27 00:05:09.839 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:05:11 compute-0 nova_compute[189387]: 2025-11-27 00:05:11.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:05:11 compute-0 nova_compute[189387]: 2025-11-27 00:05:11.423 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:05:14 compute-0 nova_compute[189387]: 2025-11-27 00:05:14.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:05:14 compute-0 podman[264405]: 2025-11-27 00:05:14.82918067 +0000 UTC m=+0.126901564 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Nov 27 00:05:14 compute-0 nova_compute[189387]: 2025-11-27 00:05:14.842 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:05:16 compute-0 nova_compute[189387]: 2025-11-27 00:05:16.428 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:05:18 compute-0 nova_compute[189387]: 2025-11-27 00:05:18.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:05:19 compute-0 nova_compute[189387]: 2025-11-27 00:05:19.846 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:05:21 compute-0 nova_compute[189387]: 2025-11-27 00:05:21.428 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:05:21 compute-0 podman[264424]: 2025-11-27 00:05:21.800513615 +0000 UTC m=+0.096041773 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.openshift.tags=base rhel9, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.openshift.expose-services=, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., version=9.4)
Nov 27 00:05:21 compute-0 podman[264426]: 2025-11-27 00:05:21.814843916 +0000 UTC m=+0.091959135 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 27 00:05:21 compute-0 podman[264439]: 2025-11-27 00:05:21.824729069 +0000 UTC m=+0.098708675 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.6, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, release=1755695350)
Nov 27 00:05:21 compute-0 podman[264427]: 2025-11-27 00:05:21.827692978 +0000 UTC m=+0.112119051 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 27 00:05:21 compute-0 podman[264429]: 2025-11-27 00:05:21.834885418 +0000 UTC m=+0.113991741 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Nov 27 00:05:21 compute-0 podman[264425]: 2025-11-27 00:05:21.84736119 +0000 UTC m=+0.142415166 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 27 00:05:24 compute-0 nova_compute[189387]: 2025-11-27 00:05:24.849 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:05:26 compute-0 nova_compute[189387]: 2025-11-27 00:05:26.431 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
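
The recurring "[POLLIN] on fd 26 __log_wakeup" DEBUG lines come from the OVS Python IDL that ovsdbapp drives for nova-compute's ovsdb connection: ovs.poller logs a wakeup each time the monitored socket becomes readable, and the IDL then processes the update. A minimal sketch of that mechanism, assuming the ovs package behind /usr/lib64/python3.9/site-packages/ovs/poller.py is importable; the socketpair is a stand-in for the real ovsdb socket:

    import select
    import socket

    import ovs.poller  # the module named in the log's file path

    rx, tx = socket.socketpair()  # stand-in for the ovsdb-server connection
    tx.send(b"x")                 # make rx readable so block() returns at once

    poller = ovs.poller.Poller()
    poller.fd_wait(rx.fileno(), select.POLLIN)  # register interest in POLLIN
    poller.block()  # on wakeup, the poller's __log_wakeup emits the DEBUG line
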
Nov 27 00:05:28 compute-0 nova_compute[189387]: 2025-11-27 00:05:28.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:05:28 compute-0 nova_compute[189387]: 2025-11-27 00:05:28.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 27 00:05:29 compute-0 podman[203621]: time="2025-11-27T00:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 27 00:05:29 compute-0 podman[203621]: @ - - [27/Nov/2025:00:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 27 00:05:29 compute-0 podman[203621]: @ - - [27/Nov/2025:00:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4353 "" "Go-http-client/1.1"
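
The two GET requests above are the podman system service answering REST clients over its API socket; the prometheus-podman-exporter container configured later in this log points CONTAINER_HOST at unix:///run/podman/podman.sock, which matches this traffic. A sketch of the same /v4.9.3/libpod query using only the Python standard library, with the socket path taken from that exporter config:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Plain HTTP over a unix socket, enough for the libpod API."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c["State"])  # container names and states
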
Nov 27 00:05:29 compute-0 nova_compute[189387]: 2025-11-27 00:05:29.853 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:05:31 compute-0 openstack_network_exporter[205787]: ERROR   00:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:05:31 compute-0 openstack_network_exporter[205787]: ERROR   00:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:05:31 compute-0 openstack_network_exporter[205787]: ERROR   00:05:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 27 00:05:31 compute-0 openstack_network_exporter[205787]: ERROR   00:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 27 00:05:31 compute-0 openstack_network_exporter[205787]: ERROR   00:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
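
The exporter reaches each OVS/OVN daemon through its control socket, conventionally <rundir>/<daemon>.<pid>.ctl. On a compute node ovn-northd is not expected to run at all (it is a control-plane daemon), so that probe fails on every scrape cycle, and the dpif-netdev errors follow from ovs-vswitchd not exposing a userspace datapath here. A sketch of the discovery step under that naming assumption, with the rundirs taken from the exporter's volume mounts above:

    import glob

    # <rundir>/<daemon>.<pid>.ctl is the conventional control socket layout
    for daemon, rundir in [("ovsdb-server", "/run/openvswitch"),
                           ("ovn-northd", "/run/ovn")]:
        sockets = glob.glob(f"{rundir}/{daemon}.*.ctl")
        print(daemon, "->", sockets or "no control socket files found")
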
Nov 27 00:05:31 compute-0 nova_compute[189387]: 2025-11-27 00:05:31.435 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:05:34 compute-0 nova_compute[189387]: 2025-11-27 00:05:34.858 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:05:36 compute-0 nova_compute[189387]: 2025-11-27 00:05:36.940 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:05:37 compute-0 podman[264540]: 2025-11-27 00:05:37.097149037 +0000 UTC m=+0.117978507 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 27 00:05:37 compute-0 nova_compute[189387]: 2025-11-27 00:05:37.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:05:39 compute-0 podman[264559]: 2025-11-27 00:05:39.799980708 +0000 UTC m=+0.089338426 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 27 00:05:39 compute-0 nova_compute[189387]: 2025-11-27 00:05:39.863 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:05:41 compute-0 nova_compute[189387]: 2025-11-27 00:05:41.944 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:05:44 compute-0 nova_compute[189387]: 2025-11-27 00:05:44.866 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:05:45 compute-0 podman[264583]: 2025-11-27 00:05:45.782797483 +0000 UTC m=+0.077900962 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_id=edpm)
Nov 27 00:05:46 compute-0 nova_compute[189387]: 2025-11-27 00:05:46.946 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:05:49 compute-0 nova_compute[189387]: 2025-11-27 00:05:49.870 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:05:51 compute-0 nova_compute[189387]: 2025-11-27 00:05:51.950 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:05:52 compute-0 podman[264605]: 2025-11-27 00:05:52.824378204 +0000 UTC m=+0.098141190 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 27 00:05:52 compute-0 podman[264606]: 2025-11-27 00:05:52.828476843 +0000 UTC m=+0.106596024 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 27 00:05:52 compute-0 podman[264608]: 2025-11-27 00:05:52.832672275 +0000 UTC m=+0.093506967 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, distribution-scope=public, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, release=1755695350)
Nov 27 00:05:52 compute-0 podman[264603]: 2025-11-27 00:05:52.838819678 +0000 UTC m=+0.117437933 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, io.buildah.version=1.29.0, distribution-scope=public, maintainer=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, managed_by=edpm_ansible, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, version=9.4, vendor=Red Hat, Inc., release-0.7.12=, config_id=edpm)
Nov 27 00:05:52 compute-0 podman[264607]: 2025-11-27 00:05:52.856702093 +0000 UTC m=+0.130679594 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 27 00:05:52 compute-0 podman[264604]: 2025-11-27 00:05:52.890737877 +0000 UTC m=+0.163709002 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
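
Among the health events above, the node_exporter record carries its full command line, including --collector.systemd.unit-include, which limits the systemd collector to the EDPM, Open vSwitch, virt and rsyslog units. A quick check of what that pattern matches, assuming node_exporter's usual fully-anchored regex matching; the log shows the value with Python string escaping, so '\\.service' is a literal '\.service', and the unit names below are illustrative:

    import re

    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    for unit in ["openvswitch.service", "virtqemud.service",
                 "rsyslog.service", "sshd.service"]:
        print(unit, bool(unit_include.fullmatch(unit)))
    # only sshd.service falls outside the include list
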
Nov 27 00:05:54 compute-0 nova_compute[189387]: 2025-11-27 00:05:54.875 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:05:56 compute-0 nova_compute[189387]: 2025-11-27 00:05:56.951 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:05:59 compute-0 podman[203621]: time="2025-11-27T00:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 27 00:05:59 compute-0 podman[203621]: @ - - [27/Nov/2025:00:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 27 00:05:59 compute-0 podman[203621]: @ - - [27/Nov/2025:00:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4351 "" "Go-http-client/1.1"
Nov 27 00:05:59 compute-0 nova_compute[189387]: 2025-11-27 00:05:59.878 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:06:01 compute-0 openstack_network_exporter[205787]: ERROR   00:06:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 27 00:06:01 compute-0 openstack_network_exporter[205787]: ERROR   00:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:06:01 compute-0 openstack_network_exporter[205787]: ERROR   00:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:06:01 compute-0 openstack_network_exporter[205787]: ERROR   00:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 27 00:06:01 compute-0 openstack_network_exporter[205787]: ERROR   00:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 27 00:06:01 compute-0 nova_compute[189387]: 2025-11-27 00:06:01.953 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:06:04 compute-0 nova_compute[189387]: 2025-11-27 00:06:04.150 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:06:04 compute-0 nova_compute[189387]: 2025-11-27 00:06:04.150 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 27 00:06:04 compute-0 nova_compute[189387]: 2025-11-27 00:06:04.150 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:06:04 compute-0 nova_compute[189387]: 2025-11-27 00:06:04.183 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 27 00:06:04 compute-0 nova_compute[189387]: 2025-11-27 00:06:04.184 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 27 00:06:04 compute-0 nova_compute[189387]: 2025-11-27 00:06:04.184 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 27 00:06:04 compute-0 nova_compute[189387]: 2025-11-27 00:06:04.184 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 27 00:06:04 compute-0 nova_compute[189387]: 2025-11-27 00:06:04.507 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 27 00:06:04 compute-0 nova_compute[189387]: 2025-11-27 00:06:04.508 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5333MB free_disk=72.29913711547852GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 27 00:06:04 compute-0 nova_compute[189387]: 2025-11-27 00:06:04.508 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 27 00:06:04 compute-0 nova_compute[189387]: 2025-11-27 00:06:04.508 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 27 00:06:04 compute-0 nova_compute[189387]: 2025-11-27 00:06:04.560 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 27 00:06:04 compute-0 nova_compute[189387]: 2025-11-27 00:06:04.561 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 27 00:06:04 compute-0 nova_compute[189387]: 2025-11-27 00:06:04.583 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 27 00:06:04 compute-0 nova_compute[189387]: 2025-11-27 00:06:04.600 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 27 00:06:04 compute-0 nova_compute[189387]: 2025-11-27 00:06:04.601 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 27 00:06:04 compute-0 nova_compute[189387]: 2025-11-27 00:06:04.602 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
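
The inventory reported to Placement above determines schedulable capacity directly: Placement computes usable capacity per resource class as (total - reserved) * allocation_ratio. Worked through with the values from the set_inventory_for_provider line:

    # values copied from the inventory data in the log line above
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2
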
Nov 27 00:06:04 compute-0 nova_compute[189387]: 2025-11-27 00:06:04.882 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:06:05 compute-0 nova_compute[189387]: 2025-11-27 00:06:05.576 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:06:05 compute-0 nova_compute[189387]: 2025-11-27 00:06:05.576 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 27 00:06:05 compute-0 nova_compute[189387]: 2025-11-27 00:06:05.577 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 27 00:06:05 compute-0 nova_compute[189387]: 2025-11-27 00:06:05.595 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 27 00:06:06 compute-0 nova_compute[189387]: 2025-11-27 00:06:06.138 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:06:06 compute-0 nova_compute[189387]: 2025-11-27 00:06:06.955 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:06:07 compute-0 nova_compute[189387]: 2025-11-27 00:06:07.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
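
All of the "Running periodic task ComputeManager.*" lines share one origin: ComputeManager inherits oslo_service's PeriodicTasks, whose run_periodic_tasks logs each decorated method before invoking it. A minimal sketch of that wiring; the spacing value and method body here are illustrative:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _poll_rescued_instances(self, context):
            """Called by run_periodic_tasks, which first logs the
            'Running periodic task ...' DEBUG line seen above."""

    Manager(cfg.CONF).run_periodic_tasks(context=None)
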
Nov 27 00:06:07 compute-0 podman[264717]: 2025-11-27 00:06:07.771434176 +0000 UTC m=+0.072279512 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd)
Nov 27 00:06:08 compute-0 nova_compute[189387]: 2025-11-27 00:06:08.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:06:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:06:09.679 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 27 00:06:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:06:09.680 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 27 00:06:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:06:09.680 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 27 00:06:09 compute-0 nova_compute[189387]: 2025-11-27 00:06:09.885 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:06:10 compute-0 podman[264737]: 2025-11-27 00:06:10.761208175 +0000 UTC m=+0.059144264 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 27 00:06:11 compute-0 nova_compute[189387]: 2025-11-27 00:06:11.956 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:06:12 compute-0 nova_compute[189387]: 2025-11-27 00:06:12.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:06:14 compute-0 nova_compute[189387]: 2025-11-27 00:06:14.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:06:14 compute-0 nova_compute[189387]: 2025-11-27 00:06:14.890 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:06:16 compute-0 podman[264761]: 2025-11-27 00:06:16.796949313 +0000 UTC m=+0.097134202 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4)
Nov 27 00:06:16 compute-0 nova_compute[189387]: 2025-11-27 00:06:16.958 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:06:18 compute-0 nova_compute[189387]: 2025-11-27 00:06:18.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:06:19 compute-0 nova_compute[189387]: 2025-11-27 00:06:19.893 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:06:21 compute-0 nova_compute[189387]: 2025-11-27 00:06:21.961 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:06:23 compute-0 podman[264781]: 2025-11-27 00:06:23.800112904 +0000 UTC m=+0.098270583 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., release-0.7.12=, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 27 00:06:23 compute-0 podman[264797]: 2025-11-27 00:06:23.805368514 +0000 UTC m=+0.083540422 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, container_name=openstack_network_exporter)
Nov 27 00:06:23 compute-0 podman[264784]: 2025-11-27 00:06:23.810228332 +0000 UTC m=+0.091202615 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 27 00:06:23 compute-0 podman[264783]: 2025-11-27 00:06:23.816628043 +0000 UTC m=+0.107413947 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 27 00:06:23 compute-0 podman[264794]: 2025-11-27 00:06:23.823627529 +0000 UTC m=+0.102078134 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi)
Nov 27 00:06:23 compute-0 podman[264782]: 2025-11-27 00:06:23.83757956 +0000 UTC m=+0.132270308 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 27 00:06:24 compute-0 nova_compute[189387]: 2025-11-27 00:06:24.896 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:06:25 compute-0 nova_compute[189387]: 2025-11-27 00:06:25.119 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:06:26 compute-0 nova_compute[189387]: 2025-11-27 00:06:26.963 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:06:29 compute-0 podman[203621]: time="2025-11-27T00:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 27 00:06:29 compute-0 podman[203621]: @ - - [27/Nov/2025:00:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 27 00:06:29 compute-0 podman[203621]: @ - - [27/Nov/2025:00:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4346 "" "Go-http-client/1.1"
Nov 27 00:06:29 compute-0 nova_compute[189387]: 2025-11-27 00:06:29.901 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:06:31 compute-0 openstack_network_exporter[205787]: ERROR   00:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:06:31 compute-0 openstack_network_exporter[205787]: ERROR   00:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:06:31 compute-0 openstack_network_exporter[205787]: ERROR   00:06:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 27 00:06:31 compute-0 openstack_network_exporter[205787]: ERROR   00:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 27 00:06:31 compute-0 openstack_network_exporter[205787]: ERROR   00:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 27 00:06:31 compute-0 nova_compute[189387]: 2025-11-27 00:06:31.966 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:06:34 compute-0 nova_compute[189387]: 2025-11-27 00:06:34.905 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.858 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the processing to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.858 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.858 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.859 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.859 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.861 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.861 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.861 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.861 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.861 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.861 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.861 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.861 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.862 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.861 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.862 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.862 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.862 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.862 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.863 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.863 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.863 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.863 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.863 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.865 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.865 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.865 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.865 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.866 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.866 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.866 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.866 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.866 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.866 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.867 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.867 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.867 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:06:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:06:36.868 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:06:37 compute-0 nova_compute[189387]: 2025-11-27 00:06:37.203 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:06:38 compute-0 podman[264906]: 2025-11-27 00:06:38.782719302 +0000 UTC m=+0.082898834 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 27 00:06:39 compute-0 nova_compute[189387]: 2025-11-27 00:06:39.910 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:06:41 compute-0 podman[264927]: 2025-11-27 00:06:41.78000127 +0000 UTC m=+0.075595640 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 27 00:06:42 compute-0 nova_compute[189387]: 2025-11-27 00:06:42.206 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:06:44 compute-0 nova_compute[189387]: 2025-11-27 00:06:44.913 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:06:47 compute-0 nova_compute[189387]: 2025-11-27 00:06:47.208 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:06:47 compute-0 podman[264950]: 2025-11-27 00:06:47.801854319 +0000 UTC m=+0.105435193 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 27 00:06:49 compute-0 nova_compute[189387]: 2025-11-27 00:06:49.916 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:06:52 compute-0 nova_compute[189387]: 2025-11-27 00:06:52.211 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:06:54 compute-0 podman[264973]: 2025-11-27 00:06:54.779520613 +0000 UTC m=+0.071837881 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 27 00:06:54 compute-0 podman[264981]: 2025-11-27 00:06:54.809521079 +0000 UTC m=+0.083921980 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 27 00:06:54 compute-0 podman[264992]: 2025-11-27 00:06:54.811718798 +0000 UTC m=+0.087136056 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc., architecture=x86_64, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 27 00:06:54 compute-0 podman[264971]: 2025-11-27 00:06:54.817590844 +0000 UTC m=+0.113334903 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2024-09-18T21:23:30, release-0.7.12=, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_id=edpm, container_name=kepler, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Nov 27 00:06:54 compute-0 podman[264974]: 2025-11-27 00:06:54.817651216 +0000 UTC m=+0.103154842 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 27 00:06:54 compute-0 podman[264972]: 2025-11-27 00:06:54.832373227 +0000 UTC m=+0.128010023 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125)
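Each health_status event above is podman running the container's configured healthcheck and reporting the result alongside the edpm_ansible config_data the container was created from. A minimal sketch (not the edpm_ansible implementation) of how such a config_data dict could map onto a podman CLI invocation; keys like depends_on, recreate, cgroupns and security_opt are deliberately ignored here:

    def podman_args(name, cfg):
        """Translate an edpm-style config_data dict into podman CLI args (sketch)."""
        args = ["podman", "run", "-d", "--name", name]
        if cfg.get("net") == "host":
            args += ["--network", "host"]
        if cfg.get("privileged"):
            args += ["--privileged"]
        if cfg.get("user"):
            args += ["--user", cfg["user"]]
        args += ["--restart", cfg.get("restart", "no")]
        for port in cfg.get("ports", []):
            args += ["--publish", port]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={val}"]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        if "healthcheck" in cfg:
            args += ["--health-cmd", cfg["healthcheck"]["test"]]
        args.append(cfg["image"])
        cmd = cfg.get("command", [])
        args += cmd if isinstance(cmd, list) else [cmd]
        return args

For example, feeding the node_exporter config_data shown above through podman_args("node_exporter", cfg) yields a command carrying --network host, --publish 9100:9100 and the /openstack/healthcheck health command.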
Nov 27 00:06:54 compute-0 nova_compute[189387]: 2025-11-27 00:06:54.918 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:06:57 compute-0 nova_compute[189387]: 2025-11-27 00:06:57.214 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:06:59 compute-0 podman[203621]: time="2025-11-27T00:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 27 00:06:59 compute-0 podman[203621]: @ - - [27/Nov/2025:00:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 27 00:06:59 compute-0 podman[203621]: @ - - [27/Nov/2025:00:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4353 "" "Go-http-client/1.1"
Nov 27 00:06:59 compute-0 nova_compute[189387]: 2025-11-27 00:06:59.923 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:07:01 compute-0 openstack_network_exporter[205787]: ERROR   00:07:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 27 00:07:01 compute-0 openstack_network_exporter[205787]: ERROR   00:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:07:01 compute-0 openstack_network_exporter[205787]: ERROR   00:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:07:01 compute-0 openstack_network_exporter[205787]: ERROR   00:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 27 00:07:01 compute-0 openstack_network_exporter[205787]: ERROR   00:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
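The exporter probes local OVS/OVN daemons through their control sockets (files named <daemon>.<pid>.ctl under the daemon rundir). On a compute node ovn-northd does not run, so those lookups are expected to fail, and the dpif-netdev calls fail because no userspace (netdev) datapath exists here. A rough sketch of the same discovery, assuming the standard rundirs that the exporter's config_data mounts into the container:

    import glob

    # Standard OVS/OVN rundirs; ovn-northd's socket would live under /run/ovn.
    RUNDIRS = ("/run/openvswitch", "/run/ovn")
    for daemon in ("ovsdb-server", "ovs-vswitchd", "ovn-northd"):
        socks = [p for d in RUNDIRS for p in glob.glob(f"{d}/{daemon}.*.ctl")]
        # An empty result reproduces the "no control socket files found" errors above.
        print(daemon, "->", socks or "no control socket files found")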
Nov 27 00:07:02 compute-0 nova_compute[189387]: 2025-11-27 00:07:02.215 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:07:04 compute-0 nova_compute[189387]: 2025-11-27 00:07:04.927 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:07:05 compute-0 nova_compute[189387]: 2025-11-27 00:07:05.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:07:05 compute-0 nova_compute[189387]: 2025-11-27 00:07:05.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 27 00:07:05 compute-0 nova_compute[189387]: 2025-11-27 00:07:05.125 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 27 00:07:05 compute-0 nova_compute[189387]: 2025-11-27 00:07:05.141 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 27 00:07:05 compute-0 nova_compute[189387]: 2025-11-27 00:07:05.141 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:07:05 compute-0 nova_compute[189387]: 2025-11-27 00:07:05.142 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 27 00:07:06 compute-0 nova_compute[189387]: 2025-11-27 00:07:06.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:07:06 compute-0 nova_compute[189387]: 2025-11-27 00:07:06.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:07:06 compute-0 nova_compute[189387]: 2025-11-27 00:07:06.164 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 27 00:07:06 compute-0 nova_compute[189387]: 2025-11-27 00:07:06.165 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 27 00:07:06 compute-0 nova_compute[189387]: 2025-11-27 00:07:06.165 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
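The Acquiring/acquired/released triplet above is oslo.concurrency's lockutils wrapper logging around the decorated method, including the waited and held durations. A minimal sketch of the pattern that produces these lines; the decorator is the real oslo.concurrency API, the method body is elided:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # The synchronized wrapper emits the "Acquiring lock", "acquired by"
        # and "released by" DEBUG messages around this body.
        pass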
Nov 27 00:07:06 compute-0 nova_compute[189387]: 2025-11-27 00:07:06.165 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 27 00:07:06 compute-0 nova_compute[189387]: 2025-11-27 00:07:06.470 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 27 00:07:06 compute-0 nova_compute[189387]: 2025-11-27 00:07:06.471 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5334MB free_disk=72.29913711547852GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 27 00:07:06 compute-0 nova_compute[189387]: 2025-11-27 00:07:06.471 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 27 00:07:06 compute-0 nova_compute[189387]: 2025-11-27 00:07:06.471 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 27 00:07:06 compute-0 nova_compute[189387]: 2025-11-27 00:07:06.559 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 27 00:07:06 compute-0 nova_compute[189387]: 2025-11-27 00:07:06.560 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 27 00:07:06 compute-0 nova_compute[189387]: 2025-11-27 00:07:06.586 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 27 00:07:06 compute-0 nova_compute[189387]: 2025-11-27 00:07:06.612 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 27 00:07:06 compute-0 nova_compute[189387]: 2025-11-27 00:07:06.614 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 27 00:07:06 compute-0 nova_compute[189387]: 2025-11-27 00:07:06.614 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
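Placement treats each inventory record above as schedulable capacity = (total - reserved) * allocation_ratio. A quick check against the figures the resource tracker just reported:

    # Values copied from the set_inventory_for_provider line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7168.0, DISK_GB 70.2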
Nov 27 00:07:07 compute-0 nova_compute[189387]: 2025-11-27 00:07:07.217 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:07:07 compute-0 nova_compute[189387]: 2025-11-27 00:07:07.614 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:07:08 compute-0 nova_compute[189387]: 2025-11-27 00:07:08.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:07:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:07:09.680 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 27 00:07:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:07:09.680 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 27 00:07:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:07:09.680 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 27 00:07:09 compute-0 podman[265088]: 2025-11-27 00:07:09.764393489 +0000 UTC m=+0.065233174 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible)
Nov 27 00:07:09 compute-0 nova_compute[189387]: 2025-11-27 00:07:09.930 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:07:12 compute-0 nova_compute[189387]: 2025-11-27 00:07:12.218 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:07:12 compute-0 podman[265109]: 2025-11-27 00:07:12.768148589 +0000 UTC m=+0.066649672 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 27 00:07:13 compute-0 nova_compute[189387]: 2025-11-27 00:07:13.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:07:14 compute-0 nova_compute[189387]: 2025-11-27 00:07:14.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:07:14 compute-0 nova_compute[189387]: 2025-11-27 00:07:14.933 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:07:17 compute-0 nova_compute[189387]: 2025-11-27 00:07:17.221 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:07:18 compute-0 podman[265133]: 2025-11-27 00:07:18.801047152 +0000 UTC m=+0.093153906 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 27 00:07:19 compute-0 nova_compute[189387]: 2025-11-27 00:07:19.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 27 00:07:19 compute-0 nova_compute[189387]: 2025-11-27 00:07:19.936 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:07:22 compute-0 nova_compute[189387]: 2025-11-27 00:07:22.223 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:07:24 compute-0 nova_compute[189387]: 2025-11-27 00:07:24.939 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:07:25 compute-0 podman[265157]: 2025-11-27 00:07:25.799493077 +0000 UTC m=+0.077905142 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 27 00:07:25 compute-0 podman[265154]: 2025-11-27 00:07:25.824313617 +0000 UTC m=+0.103045600 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, release-0.7.12=, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.openshift.expose-services=)
Nov 27 00:07:25 compute-0 podman[265164]: 2025-11-27 00:07:25.837930108 +0000 UTC m=+0.101659892 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, version=9.6, config_id=edpm, release=1755695350, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=)
Nov 27 00:07:25 compute-0 podman[265160]: 2025-11-27 00:07:25.838553626 +0000 UTC m=+0.111535667 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Nov 27 00:07:25 compute-0 podman[265156]: 2025-11-27 00:07:25.852618669 +0000 UTC m=+0.128844546 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 27 00:07:25 compute-0 podman[265155]: 2025-11-27 00:07:25.874457109 +0000 UTC m=+0.161248257 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 27 00:07:27 compute-0 nova_compute[189387]: 2025-11-27 00:07:27.227 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:07:29 compute-0 podman[203621]: time="2025-11-27T00:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 27 00:07:29 compute-0 podman[203621]: @ - - [27/Nov/2025:00:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 27 00:07:29 compute-0 podman[203621]: @ - - [27/Nov/2025:00:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4347 "" "Go-http-client/1.1"
Nov 27 00:07:29 compute-0 nova_compute[189387]: 2025-11-27 00:07:29.942 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:07:31 compute-0 openstack_network_exporter[205787]: ERROR   00:07:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 27 00:07:31 compute-0 openstack_network_exporter[205787]: ERROR   00:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:07:31 compute-0 openstack_network_exporter[205787]: ERROR   00:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:07:31 compute-0 openstack_network_exporter[205787]: ERROR   00:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 27 00:07:31 compute-0 openstack_network_exporter[205787]: ERROR   00:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 27 00:07:32 compute-0 nova_compute[189387]: 2025-11-27 00:07:32.229 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:07:34 compute-0 nova_compute[189387]: 2025-11-27 00:07:34.945 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:07:37 compute-0 nova_compute[189387]: 2025-11-27 00:07:37.231 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:07:39 compute-0 nova_compute[189387]: 2025-11-27 00:07:39.950 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:07:40 compute-0 podman[265271]: 2025-11-27 00:07:40.779820874 +0000 UTC m=+0.074602084 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 27 00:07:42 compute-0 nova_compute[189387]: 2025-11-27 00:07:42.233 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:07:43 compute-0 podman[265292]: 2025-11-27 00:07:43.811368632 +0000 UTC m=+0.089010196 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 27 00:07:44 compute-0 nova_compute[189387]: 2025-11-27 00:07:44.953 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:07:47 compute-0 nova_compute[189387]: 2025-11-27 00:07:47.236 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:07:49 compute-0 podman[265317]: 2025-11-27 00:07:49.809495362 +0000 UTC m=+0.101604432 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 27 00:07:49 compute-0 nova_compute[189387]: 2025-11-27 00:07:49.957 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:07:52 compute-0 nova_compute[189387]: 2025-11-27 00:07:52.241 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:07:54 compute-0 nova_compute[189387]: 2025-11-27 00:07:54.961 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:07:56 compute-0 podman[265335]: 2025-11-27 00:07:56.838658243 +0000 UTC m=+0.116846036 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, release-0.7.12=, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-container, managed_by=edpm_ansible, name=ubi9, version=9.4, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 27 00:07:56 compute-0 podman[265337]: 2025-11-27 00:07:56.83891004 +0000 UTC m=+0.097054061 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 27 00:07:56 compute-0 podman[265338]: 2025-11-27 00:07:56.840831801 +0000 UTC m=+0.115616624 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Nov 27 00:07:56 compute-0 podman[265344]: 2025-11-27 00:07:56.846686847 +0000 UTC m=+0.114184305 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 27 00:07:56 compute-0 podman[265355]: 2025-11-27 00:07:56.847005655 +0000 UTC m=+0.092964531 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, architecture=x86_64, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter)
Nov 27 00:07:56 compute-0 podman[265336]: 2025-11-27 00:07:56.852311406 +0000 UTC m=+0.138197234 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
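The podman health events above carry everything needed to monitor container health from the journal: a container id followed by a parenthesized key=value list that begins image=, name=, health_status=, health_failing_streak=. A minimal Python sketch that extracts those fields (assuming the fixed key order shown in these lines; stdlib only):

    import re
    import sys

    # The first four keys appear in a fixed order in the podman events above.
    HEALTH_RE = re.compile(
        r"container health_status (?P<cid>[0-9a-f]+) "
        r"\(image=(?P<image>[^,]+), name=(?P<name>[^,]+), "
        r"health_status=(?P<status>[^,]+), health_failing_streak=(?P<streak>\d+)"
    )

    for line in sys.stdin:
        m = HEALTH_RE.search(line)
        if m:
            print(m["name"], m["status"], "failing_streak=" + m["streak"])

Fed the six events above, this prints healthy with failing_streak=0 for kepler, node_exporter, ovn_metadata_agent, ceilometer_agent_ipmi, openstack_network_exporter and ovn_controller.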
Nov 27 00:07:57 compute-0 nova_compute[189387]: 2025-11-27 00:07:57.245 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:07:59 compute-0 podman[203621]: time="2025-11-27T00:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 27 00:07:59 compute-0 podman[203621]: @ - - [27/Nov/2025:00:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 27 00:07:59 compute-0 podman[203621]: @ - - [27/Nov/2025:00:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4353 "" "Go-http-client/1.1"
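These two requests are the podman system service answering libpod REST calls over its Unix socket; the same socket path appears later in this log in the podman_exporter config (unix:///run/podman/podman.sock). A stdlib-only sketch of the containers/json call shown above (the socket path and v4.9.3 API prefix are taken from this log and may differ on other hosts; Names and State are libpod response fields):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over a Unix socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c["State"])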
Nov 27 00:07:59 compute-0 nova_compute[189387]: 2025-11-27 00:07:59.966 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:08:01 compute-0 openstack_network_exporter[205787]: ERROR   00:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:08:01 compute-0 openstack_network_exporter[205787]: ERROR   00:08:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 27 00:08:01 compute-0 openstack_network_exporter[205787]: ERROR   00:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:08:01 compute-0 openstack_network_exporter[205787]: ERROR   00:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 27 00:08:01 compute-0 openstack_network_exporter[205787]: ERROR   00:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
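These errors are appctl-style calls failing because the exporter cannot find the daemons' control sockets; on a compute node that runs ovn-controller but no ovn-northd or local ovsdb-server the first two failures are expected, and the dpif-netdev calls fail because no userspace (netdev) datapath exists here. A quick check for the conventional control-socket locations (these rundir paths are the usual defaults, mirrored by the /run/openvswitch and /run/ovn volume mounts in the exporter config above, and are an assumption for other setups):

    import glob

    # OVS/OVN daemons create <daemon>.<pid>.ctl control sockets in their rundir.
    for pattern in ("/var/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "no control socket files found")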
Nov 27 00:08:02 compute-0 nova_compute[189387]: 2025-11-27 00:08:02.249 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:08:04 compute-0 nova_compute[189387]: 2025-11-27 00:08:04.969 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:08:06 compute-0 nova_compute[189387]: 2025-11-27 00:08:06.119 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 27 00:08:06 compute-0 nova_compute[189387]: 2025-11-27 00:08:06.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
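The "Running periodic task ..." lines here and below are oslo.service's periodic task runner iterating over decorated ComputeManager methods. A minimal sketch of that machinery (the spacing value and method body are illustrative, not taken from this log):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _check_instance_build_time(self, context):
            # run_periodic_tasks() logs "Running periodic task ..." and calls this
            pass

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)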
Nov 27 00:08:06 compute-0 nova_compute[189387]: 2025-11-27 00:08:06.155 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 27 00:08:06 compute-0 nova_compute[189387]: 2025-11-27 00:08:06.156 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 27 00:08:06 compute-0 nova_compute[189387]: 2025-11-27 00:08:06.156 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
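This Acquiring/acquired/released triple is oslo.concurrency's lock decorator at work; nova serializes all resource-tracker updates behind the one "compute_resources" lock, which is why the waited/held durations are worth watching. A minimal sketch of the same pattern (the function body is a stand-in):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # lockutils' inner() wrapper emits the Acquiring/acquired/released
        # DEBUG lines seen above, including the waited/held durations
        pass

    clean_compute_node_cache()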
Nov 27 00:08:06 compute-0 nova_compute[189387]: 2025-11-27 00:08:06.156 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 27 00:08:06 compute-0 nova_compute[189387]: 2025-11-27 00:08:06.475 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 27 00:08:06 compute-0 nova_compute[189387]: 2025-11-27 00:08:06.481 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5316MB free_disk=72.29911804199219GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
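The resource view above embeds the host's PCI inventory as a JSON array, so it can be recovered from the log line itself. A short sketch that pulls it out and tallies devices per vendor (1af4 is virtio, 8086 is Intel); applied to the line above it returns Counter({'1af4': 6, '8086': 5}):

    import json
    import re
    from collections import Counter

    def pci_vendor_counts(logline):
        # the JSON array runs from pci_devices=[ up to the "] _report..." trailer
        m = re.search(r"pci_devices=(\[.*\]) _report", logline)
        devs = json.loads(m.group(1)) if m else []
        return Counter(d["vendor_id"] for d in devs)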
Nov 27 00:08:06 compute-0 nova_compute[189387]: 2025-11-27 00:08:06.482 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 27 00:08:06 compute-0 nova_compute[189387]: 2025-11-27 00:08:06.483 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 27 00:08:07 compute-0 nova_compute[189387]: 2025-11-27 00:08:07.251 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:08:07 compute-0 nova_compute[189387]: 2025-11-27 00:08:07.322 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 27 00:08:07 compute-0 nova_compute[189387]: 2025-11-27 00:08:07.323 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 27 00:08:07 compute-0 nova_compute[189387]: 2025-11-27 00:08:07.740 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 27 00:08:07 compute-0 nova_compute[189387]: 2025-11-27 00:08:07.905 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
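Placement derives schedulable capacity from such an inventory as int((total - reserved) * allocation_ratio) per resource class; a worked sketch with the values logged above, which come out to 32 VCPU, 7168 MB of RAM and 70 GB of disk:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7680, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = int((inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
        print(rc, capacity)   # VCPU 32, MEMORY_MB 7168, DISK_GB 70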
Nov 27 00:08:07 compute-0 nova_compute[189387]: 2025-11-27 00:08:07.907 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 27 00:08:07 compute-0 nova_compute[189387]: 2025-11-27 00:08:07.907 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.424s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 27 00:08:08 compute-0 nova_compute[189387]: 2025-11-27 00:08:08.908 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 27 00:08:08 compute-0 nova_compute[189387]: 2025-11-27 00:08:08.909 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 27 00:08:08 compute-0 nova_compute[189387]: 2025-11-27 00:08:08.910 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 27 00:08:08 compute-0 nova_compute[189387]: 2025-11-27 00:08:08.928 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 27 00:08:08 compute-0 nova_compute[189387]: 2025-11-27 00:08:08.929 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 27 00:08:08 compute-0 nova_compute[189387]: 2025-11-27 00:08:08.930 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 27 00:08:08 compute-0 nova_compute[189387]: 2025-11-27 00:08:08.931 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 27 00:08:09 compute-0 nova_compute[189387]: 2025-11-27 00:08:09.126 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 27 00:08:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:08:09.682 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 27 00:08:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:08:09.682 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 27 00:08:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:08:09.682 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 27 00:08:09 compute-0 nova_compute[189387]: 2025-11-27 00:08:09.973 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:08:11 compute-0 podman[265454]: 2025-11-27 00:08:11.796291986 +0000 UTC m=+0.081293632 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 27 00:08:12 compute-0 nova_compute[189387]: 2025-11-27 00:08:12.252 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:08:14 compute-0 podman[265473]: 2025-11-27 00:08:14.766162355 +0000 UTC m=+0.086054678 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 27 00:08:14 compute-0 nova_compute[189387]: 2025-11-27 00:08:14.977 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:08:15 compute-0 nova_compute[189387]: 2025-11-27 00:08:15.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 27 00:08:16 compute-0 nova_compute[189387]: 2025-11-27 00:08:16.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 27 00:08:17 compute-0 nova_compute[189387]: 2025-11-27 00:08:17.255 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:08:19 compute-0 nova_compute[189387]: 2025-11-27 00:08:19.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 27 00:08:19 compute-0 nova_compute[189387]: 2025-11-27 00:08:19.980 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:08:20 compute-0 podman[265499]: 2025-11-27 00:08:20.773517792 +0000 UTC m=+0.074465541 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Nov 27 00:08:22 compute-0 nova_compute[189387]: 2025-11-27 00:08:22.258 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:08:24 compute-0 nova_compute[189387]: 2025-11-27 00:08:24.984 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:08:27 compute-0 nova_compute[189387]: 2025-11-27 00:08:27.260 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:08:27 compute-0 podman[265520]: 2025-11-27 00:08:27.818600028 +0000 UTC m=+0.117241778 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, distribution-scope=public, release=1214.1726694543, release-0.7.12=, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=base rhel9, name=ubi9, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Nov 27 00:08:27 compute-0 podman[265541]: 2025-11-27 00:08:27.821878684 +0000 UTC m=+0.097064210 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, io.buildah.version=1.33.7, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_id=edpm, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, container_name=openstack_network_exporter, name=ubi9-minimal, vendor=Red Hat, Inc.)
Nov 27 00:08:27 compute-0 podman[265523]: 2025-11-27 00:08:27.826398134 +0000 UTC m=+0.114093493 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 27 00:08:27 compute-0 podman[265522]: 2025-11-27 00:08:27.827233617 +0000 UTC m=+0.118762038 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 27 00:08:27 compute-0 podman[265521]: 2025-11-27 00:08:27.847742081 +0000 UTC m=+0.142976721 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 27 00:08:27 compute-0 podman[265533]: 2025-11-27 00:08:27.848608385 +0000 UTC m=+0.112279715 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_ipmi)
Nov 27 00:08:29 compute-0 podman[203621]: time="2025-11-27T00:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 27 00:08:29 compute-0 podman[203621]: @ - - [27/Nov/2025:00:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 27 00:08:29 compute-0 podman[203621]: @ - - [27/Nov/2025:00:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4354 "" "Go-http-client/1.1"
Nov 27 00:08:29 compute-0 nova_compute[189387]: 2025-11-27 00:08:29.987 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:08:30 compute-0 nova_compute[189387]: 2025-11-27 00:08:30.119 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 27 00:08:31 compute-0 openstack_network_exporter[205787]: ERROR   00:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:08:31 compute-0 openstack_network_exporter[205787]: ERROR   00:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:08:31 compute-0 openstack_network_exporter[205787]: ERROR   00:08:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 27 00:08:31 compute-0 openstack_network_exporter[205787]: ERROR   00:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 27 00:08:31 compute-0 openstack_network_exporter[205787]: ERROR   00:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 27 00:08:32 compute-0 nova_compute[189387]: 2025-11-27 00:08:32.261 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:08:34 compute-0 nova_compute[189387]: 2025-11-27 00:08:34.991 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.860 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.860 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
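With more pollsters than worker threads, pollsters queue behind the executor and the cycle time approaches the sum of the individual pollster runtimes; a toy illustration with a single-thread ThreadPoolExecutor like the [1]-thread one logged above:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        time.sleep(0.1)                     # stand-in for one pollster's work
        return name

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as pool:
        done = list(pool.map(poll, ["cpu", "disk.root.size", "network.incoming.packets"]))
    print(done, f"{time.monotonic() - start:.1f}s")   # ~0.3s: the tasks run serially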
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.860 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f830>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.861 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f7ce544f800>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.861 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.862 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f7ce54fc050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.863 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.862 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.863 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f7ce544f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f7ce54fc0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.864 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce6613920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.865 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc140>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f7ce658e930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.866 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f7ce54fc110>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.866 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.866 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce65ba990>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.867 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc1d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.867 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f7ce856a930>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.868 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.868 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f7ce54fc1a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.868 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.868 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc260>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.869 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fa70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.869 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f7ce54fc230>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.870 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.870 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f7ce544fa40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.870 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.870 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544fad0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.871 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.872 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f7ce544faa0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.872 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.872 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f7ce54fc2c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.872 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.872 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.873 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce94d23f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.874 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f7ce54fc350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.874 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.874 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f7ce544f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.874 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.874 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc410>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.875 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.875 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f7ce54fc3e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.876 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.876 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f7ce54fc470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.876 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.876 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.877 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.877 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f7ce544f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.877 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.878 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f7ce544f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.878 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.877 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce8269670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.878 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.878 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f710>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.878 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce54fc740>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.878 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.879 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.879 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544f7d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.879 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f7ce544ffe0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f7ce5274320>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'disk.root.size': [], 'network.incoming.packets.drop': [], 'cpu': [], 'network.incoming.packets.error': [], 'disk.device.capacity': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.879 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f7ce7b465a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.879 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.879 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f7ce544f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.879 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.879 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f7ce544f6e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.879 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.880 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f7ce54fc710>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.880 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.880 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f7ce544f740>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.880 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.880 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f7ce544fb00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.880 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.880 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f7ce544f7a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.880 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.880 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f7ce544fda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f7ce6613f50>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.880 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.881 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.881 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.881 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.881 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.881 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.881 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.881 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.881 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.881 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.881 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.881 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.881 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.881 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.882 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.882 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.882 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.882 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.882 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.882 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.882 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.882 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.882 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.882 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.882 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.882 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 27 00:08:36 compute-0 ceilometer_agent_compute[200139]: 2025-11-27 00:08:36.882 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
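
[annotation] The ceilometer_agent_compute lines above trace one complete polling cycle: each pollster is registered with a shared ThreadPoolExecutor, the local_instances discovery runs once and is cached, and because this host has no instances every pollster is skipped before the cycle is marked finished. A minimal Python sketch of that register/discover/skip flow follows; apart from the local_instances name and the log strings, every identifier is a hypothetical stand-in, not ceilometer's actual API.

    from concurrent.futures import ThreadPoolExecutor

    def discover_local_instances():
        # Stand-in for AgentManager.discover(); in the cycle above it finds
        # nothing, so the discovery cache holds {'local_instances': []}.
        return []

    def poll_one(name, discovery_cache):
        # Discovery result is computed once per cycle and cached for all pollsters.
        if "local_instances" not in discovery_cache:
            discovery_cache["local_instances"] = discover_local_instances()
        resources = discovery_cache["local_instances"]
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
            return
        # ... otherwise collect and publish one sample per resource ...

    def run_polling_cycle(pollsters):
        executor = ThreadPoolExecutor()
        discovery_cache = {}                          # shared for the whole cycle
        history = {name: [] for name in pollsters}    # mirrors the growing
                                                      # "pollster history" dict
                                                      # in the log (unused here)
        futures = [(n, executor.submit(poll_one, n, discovery_cache))
                   for n in pollsters]
        for name, fut in futures:
            fut.result()
            print(f"Finished processing pollster [{name}].")

    run_polling_cycle(["cpu", "memory.usage", "disk.device.usage"])
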
Nov 27 00:08:37 compute-0 nova_compute[189387]: 2025-11-27 00:08:37.266 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:08:39 compute-0 nova_compute[189387]: 2025-11-27 00:08:39.995 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:08:42 compute-0 nova_compute[189387]: 2025-11-27 00:08:42.268 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
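
[annotation] The recurring nova_compute "[POLLIN] on fd 26" messages are the python-ovs IDL main loop (used here via ovsdbapp) waking up whenever its OVSDB socket becomes readable. A minimal loop over the same ovs library is sketched below; the connection URI and schema path are illustrative defaults, not values taken from this host.

    import ovs.db.idl
    import ovs.poller

    helper = ovs.db.idl.SchemaHelper("/usr/share/openvswitch/vswitch.ovsschema")
    helper.register_all()
    idl = ovs.db.idl.Idl("unix:/run/openvswitch/db.sock", helper)

    while True:
        idl.run()                    # apply any updates read from the socket
        poller = ovs.poller.Poller()
        idl.wait(poller)             # register the OVSDB fd for POLLIN
        poller.block()               # poll(); a readable fd is what produces
                                     # the __log_wakeup DEBUG line seen above
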
Nov 27 00:08:42 compute-0 podman[265641]: 2025-11-27 00:08:42.801060451 +0000 UTC m=+0.084118036 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 27 00:08:45 compute-0 nova_compute[189387]: 2025-11-27 00:08:45.000 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:08:45 compute-0 podman[265661]: 2025-11-27 00:08:45.780280919 +0000 UTC m=+0.075239121 container health_status 28f8ec2f1010e38a088569b5e9c946c151af177c13a99e8b9f072a65f0f4c897 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 27 00:08:47 compute-0 nova_compute[189387]: 2025-11-27 00:08:47.273 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:08:50 compute-0 nova_compute[189387]: 2025-11-27 00:08:50.004 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:08:51 compute-0 podman[265687]: 2025-11-27 00:08:51.823138777 +0000 UTC m=+0.114410972 container health_status bb6ef2f8ff375d4f66cf3480fcbc2b10abd6b5d102f79f6a9c59aa6482972517 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=fcb38123433469bfaad5a5f425f59527, config_id=edpm)
Nov 27 00:08:52 compute-0 nova_compute[189387]: 2025-11-27 00:08:52.276 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:08:55 compute-0 nova_compute[189387]: 2025-11-27 00:08:55.008 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:08:57 compute-0 nova_compute[189387]: 2025-11-27 00:08:57.277 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 27 00:08:58 compute-0 podman[265708]: 2025-11-27 00:08:58.796579598 +0000 UTC m=+0.086351067 container health_status 413a76e2bb8c29fc1b8d13b85f49159459dcdefeb626a3c0452bf078ffe96262 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 27 00:08:58 compute-0 podman[265720]: 2025-11-27 00:08:58.801212801 +0000 UTC m=+0.081461937 container health_status d7e7bc031ad24e55272ef2560d4fcdec7f3ac62a78a6ee37181139bb591f6c61 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Nov 27 00:08:58 compute-0 podman[265725]: 2025-11-27 00:08:58.801923699 +0000 UTC m=+0.080230303 container health_status db7eb26fc7778fac6ff1bac50887bceb54160ba4f2877ad5d9757b69284cc5ec (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41)
Nov 27 00:08:58 compute-0 podman[265709]: 2025-11-27 00:08:58.81622436 +0000 UTC m=+0.094351700 container health_status b9ecb0f5fa461d619272c2f5ac5d8a0e2222022bcc0b80a6f5a0d90130f0b60b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 27 00:08:58 compute-0 podman[265706]: 2025-11-27 00:08:58.818742266 +0000 UTC m=+0.115281865 container health_status 331ab0fbeb7916dc04dad7742dfbe1dda21ef7a62c427a20030a9c023288f9ad (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vcs-type=git, com.redhat.component=ubi9-container, release-0.7.12=, distribution-scope=public, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., name=ubi9, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vendor=Red Hat, Inc., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 27 00:08:58 compute-0 podman[265707]: 2025-11-27 00:08:58.823012439 +0000 UTC m=+0.117036071 container health_status 3439983cce8d9aaa80225111d21f4ea222f68573fe48d6c20d3f0908f07e76b0 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 27 00:08:59 compute-0 podman[203621]: time="2025-11-27T00:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 27 00:08:59 compute-0 podman[203621]: @ - - [27/Nov/2025:00:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28289 "" "Go-http-client/1.1"
Nov 27 00:08:59 compute-0 podman[203621]: @ - - [27/Nov/2025:00:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4354 "" "Go-http-client/1.1"
Nov 27 00:09:00 compute-0 nova_compute[189387]: 2025-11-27 00:09:00.011 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:09:01 compute-0 openstack_network_exporter[205787]: ERROR   00:09:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 27 00:09:01 compute-0 openstack_network_exporter[205787]: ERROR   00:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:09:01 compute-0 openstack_network_exporter[205787]: ERROR   00:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 27 00:09:01 compute-0 openstack_network_exporter[205787]: ERROR   00:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 27 00:09:01 compute-0 openstack_network_exporter[205787]: ERROR   00:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 27 00:09:02 compute-0 nova_compute[189387]: 2025-11-27 00:09:02.278 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:09:02 compute-0 systemd-logind[819]: New session 35 of user zuul.
Nov 27 00:09:02 compute-0 systemd[1]: Started Session 35 of User zuul.
Nov 27 00:09:05 compute-0 nova_compute[189387]: 2025-11-27 00:09:05.015 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:09:07 compute-0 nova_compute[189387]: 2025-11-27 00:09:07.125 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 27 00:09:07 compute-0 nova_compute[189387]: 2025-11-27 00:09:07.126 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 27 00:09:07 compute-0 nova_compute[189387]: 2025-11-27 00:09:07.126 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 27 00:09:07 compute-0 nova_compute[189387]: 2025-11-27 00:09:07.157 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 27 00:09:07 compute-0 nova_compute[189387]: 2025-11-27 00:09:07.157 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 27 00:09:07 compute-0 nova_compute[189387]: 2025-11-27 00:09:07.192 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 27 00:09:07 compute-0 nova_compute[189387]: 2025-11-27 00:09:07.192 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 27 00:09:07 compute-0 nova_compute[189387]: 2025-11-27 00:09:07.192 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 27 00:09:07 compute-0 nova_compute[189387]: 2025-11-27 00:09:07.192 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 27 00:09:07 compute-0 nova_compute[189387]: 2025-11-27 00:09:07.281 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:09:07 compute-0 nova_compute[189387]: 2025-11-27 00:09:07.513 189391 WARNING nova.virt.libvirt.driver [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 27 00:09:07 compute-0 nova_compute[189387]: 2025-11-27 00:09:07.514 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5219MB free_disk=72.29893112182617GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 27 00:09:07 compute-0 nova_compute[189387]: 2025-11-27 00:09:07.515 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 27 00:09:07 compute-0 nova_compute[189387]: 2025-11-27 00:09:07.515 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 27 00:09:07 compute-0 ovs-vsctl[265995]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Nov 27 00:09:08 compute-0 nova_compute[189387]: 2025-11-27 00:09:08.029 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 27 00:09:08 compute-0 nova_compute[189387]: 2025-11-27 00:09:08.029 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 27 00:09:08 compute-0 nova_compute[189387]: 2025-11-27 00:09:08.064 189391 DEBUG nova.compute.provider_tree [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed in ProviderTree for provider: de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 27 00:09:08 compute-0 nova_compute[189387]: 2025-11-27 00:09:08.081 189391 DEBUG nova.scheduler.client.report [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Inventory has not changed for provider de65df0c-bd6c-4ecc-b0a9-30ae4314ce78 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 27 00:09:08 compute-0 nova_compute[189387]: 2025-11-27 00:09:08.082 189391 DEBUG nova.compute.resource_tracker [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 27 00:09:08 compute-0 nova_compute[189387]: 2025-11-27 00:09:08.083 189391 DEBUG oslo_concurrency.lockutils [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 27 00:09:08 compute-0 virtqemud[188953]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 27 00:09:08 compute-0 virtqemud[188953]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 27 00:09:08 compute-0 virtqemud[188953]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Nov 27 00:09:09 compute-0 nova_compute[189387]: 2025-11-27 00:09:09.077 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 27 00:09:09 compute-0 nova_compute[189387]: 2025-11-27 00:09:09.123 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 27 00:09:09 compute-0 nova_compute[189387]: 2025-11-27 00:09:09.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 27 00:09:09 compute-0 nova_compute[189387]: 2025-11-27 00:09:09.124 189391 DEBUG nova.compute.manager [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 27 00:09:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:09:09.683 106595 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 27 00:09:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:09:09.684 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 27 00:09:09 compute-0 ovn_metadata_agent[106590]: 2025-11-27 00:09:09.684 106595 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 27 00:09:10 compute-0 nova_compute[189387]: 2025-11-27 00:09:10.019 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:09:10 compute-0 nova_compute[189387]: 2025-11-27 00:09:10.124 189391 DEBUG oslo_service.periodic_task [None req-676ef9c8-394f-4229-9cd6-cf6fef768b73 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 27 00:09:12 compute-0 systemd[1]: Starting Hostname Service...
Nov 27 00:09:12 compute-0 systemd[1]: Started Hostname Service.
Nov 27 00:09:12 compute-0 nova_compute[189387]: 2025-11-27 00:09:12.283 189391 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 27 00:09:13 compute-0 podman[266661]: 2025-11-27 00:09:13.790522195 +0000 UTC m=+0.084854145 container health_status 2b636e6822498465779fa1c44958b7533e064d0c8c630f0ed1acb0bd2f99c531 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)